Iterative peak combination: a robust technique for identifying relevant features in medical image histograms
Histogram-based methods can be used to analyse and transform medical images. Histogram specification is one such method which has been widely used to transform the histograms of cone beam CT (CBCT) images to match those of corresponding CT images. However, when the derived transformation is applied to the CBCT image pixels, significant artefacts can be produced. We propose the iterative peak combination algorithm, a novel and robust method for automatically identifying relevant features in medical image histograms. The procedure is conceptually simple and can be applied equally well to both CT and CBCT image histograms. We also demonstrate how iterative peak combination can be used to transform CBCT images in such a way as to improve the Hounsfield Unit (HU) calibration of CBCT image pixel values, without introducing additional artefacts. We analyse 36 pelvis CBCT images and show that the average difference in fat tissue pixel values between CT images and CBCT images processed using the iterative peak combination algorithm is 23.7 HU, compared to 136.7 HU in unprocessed CBCT images and 50.9 HU in CBCT images processed using histogram specification.
Introduction
The question we address in this work is one of transforming a cone beam CT (CBCT) (Jaffray et al 2002, Létourneau et al 2005) image histogram to more closely resemble that of a corresponding planning CT image histogram. This is a commonly-occurring problem, given the well-studied artefacts (Siewerdsen and Jaffray 1999, 2001, Poludniowski et al 2011) in CBCT images, which often result in image pixel values that do not correspond to Hounsfield Units (HU) (Fotina et al 2012).
Histogram-based methods are widely-used tools for analysing and processing medical images and have been suggested as a method to improve HU calibration in CBCT images. A typical method for addressing this problem is the use of histogram specification, also known as histogram matching, algorithms which aim to transform one image so that its histogram is the same as another. Cumulative frequency distributions are calculated from image histograms and used to determine how to adjust the shape of the source image (CBCT) histogram to match the reference image (CT) histogram. Amit and Purdie (2015), for example, use histogram specification to enable CBCT images to be used in a procedure which automatically generates radiotherapy treatment plans for breast cancer patients; Park et al (2017) use histogram specification to improve the performance of CT-CBCT deformable image registration; Zhang et al (2017) use histogram matching to allow CBCT images to be used for radiotherapy dose calculations; and Arai et al (2017) use histogram specification to enable CBCT images to be used for proton therapy dose calculations.
The main deficiencies of the histogram specification algorithm when used in this context are that CT image histograms contain far more detail than CBCT histograms, and that there may be anatomical changes in the patient between the acquisition of the CT and CBCT images. When histogram specification is used to transform a CBCT image histogram to match a CT histogram, detail must be added to the CBCT histogram. However, there is no correspondence between the voxels entering a bin of a histogram and their spatial location in the image. Therefore the detail that is added, for example increased contrast between fat tissue and muscle tissue, is unlikely to correspond to the location in the CBCT image where the detail is required. Anatomical changes in the patient between CT and CBCT imaging (e.g. weight loss), or different fields of view, will result in different numbers of voxels representing each tissue or material being present in each of the images. Therefore the true CBCT histogram is not expected to exactly match that of the CT image. Forcing the histogram of one image to match the other risks artificially changing one material to another in some voxels.
We approach the problem from the opposite direction: using the CBCT image as the target and distilling the information in the CT histogram until it resembles the CBCT. It then becomes more straightforward to compare the CT and CBCT histograms, and a simpler transformation can be defined which maps the CBCT image histogram to the CT. This approach has previously been taken by Marchant et al (2008), where the CT image histogram was simply smoothed until sufficient detail had been removed, and by Thing et al (2016), who used histograms with wide bins to reduce the amount of detail and enable the identification of peaks corresponding simply to soft tissue and air. However, it is likely that the amount of smoothing or the number of histogram bins used would need to be adjusted based on the particular patient, the anatomical site of the image or the acquisition parameters of the scanner.
In view of the recent interest in image correction methods based on histogram features, particularly in the field of CBCT imaging, it is important to find improved methods to extract features and map one image histogram to another. Therefore we propose an automatic method for identifying relevant features present in histograms of CBCT and corresponding planning CT images. The iterative peak combination algorithm has a single adjustable parameter and is conceptually very simple. We then use the results of the iterative peak combination algorithm to determine a linear transformation which scales pelvis CBCT image histograms to match the corresponding CT histograms. The same transformation is then applied to the CBCT image pixel values. An automated analysis of the raw CBCT image pixel values is performed and compared to results obtained after using iterative peak combination and histogram specification to process the CBCT images.
Image histograms
The image histograms created in this work are comprised of bins, each with a bin width and defined by two bin edges, $e_i^{\mathrm{low}}$ and $e_i^{\mathrm{high}}$. The height of each bin corresponds to the frequency, $f_i$, of image pixels falling into that bin. For a given image, the image histogram is created with a chosen bin width. The number of histogram bins is determined by finding the range of pixel values present in the image and dividing by the bin width. Every pixel in the image is then looped over to calculate which bin the pixel value falls into, and the frequency of that bin is incremented by one.
The first bin of the histogram typically contains entries corresponding to pixels which fall outside the field-of-view of the scanner that acquired the image, and which all have the same value. The frequency of the first bin is therefore automatically set to zero before continuing.
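For concreteness, the histogram construction described above can be sketched in a few lines of Python. This is an illustrative implementation only; the function name and interface are ours, not the authors':

```python
import numpy as np

def image_histogram(pixels, bin_width):
    """Fixed-bin-width histogram of image pixel values, with the
    first bin zeroed as described in the text."""
    pixels = np.asarray(pixels).ravel()
    lo, hi = pixels.min(), pixels.max()
    # Number of bins: pixel value range divided by the chosen bin width.
    n_bins = max(1, int(np.ceil((hi - lo) / bin_width)))
    freq, edges = np.histogram(pixels, bins=n_bins,
                               range=(lo, lo + n_bins * bin_width))
    # Out-of-field-of-view pixels share one padding value and fall in
    # the first bin, so its frequency is set to zero.
    freq[0] = 0
    return freq, edges
```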
Histogram peak identification
A peak, p, in the histogram is defined as being located at the point at the centre of a bin that has a frequency larger than both neighbouring bins, i.e. a peak is located at the centre of bin $i$ if

$$f_{i-1} < f_i > f_{i+1}. \tag{1}$$

This definition is clearly susceptible to finding spurious peaks due to noise in the histogram. This problem can be mitigated by using a larger bin width when creating the image histogram. However, the iterative peak combination technique described below is robust against peaks produced by noise. The use of wider histogram bins, or additional histogram smoothing, is therefore not required.
Once all peaks in the histogram have been identified, the peaks are iteratively combined using the following procedure.
Iterative peak combination
Iterative peak combination is inspired by the problem of jet-finding in high-energy particle physics (Salam 2010). The theory underlying jet-finding is very well-motivated for the applications in particle physics, and is not relevant to the discussion here. However the general idea of the technique-transforming a large amount of information into a small number of salient features-is simple and easily adapted to the problem of locating relevant peaks in histograms of medical images.
The peak combination algorithm proceeds as follows:
(i) Calculate the distance $d_{ij}$ between all pairs of peaks $p_i$ and $p_j$ (see equation (2)).
(ii) Find the pair of peaks, $p_{i_{\min}}$ and $p_{j_{\min}}$, separated by the smallest distance.
(iii) Combine the two closest peaks $p_{i_{\min}}$ and $p_{j_{\min}}$ to form peak $p_{ij}$ (see equation (3)).
(iv) Remove peaks $p_{i_{\min}}$ and $p_{j_{\min}}$ from the list of peaks.
(v) Add the combined peak $p_{ij}$ to the list of peaks.
(vi) Repeat until the number of remaining peaks equals $n_{\mathrm{target}}$.
The distance $d_{ij}$ is defined as

$$d_{ij} = \min\left(p_{i,y}^{2},\ p_{j,y}^{2}\right)\left|p_{i,x} - p_{j,x}\right|, \tag{2}$$

where $p_{i,x}$ and $p_{i,y}$ are the location (in HU) and height, respectively, of peak $i$.
The location of the combined peak, $p_{ij,x}$, is constructed by taking the average of the locations of peaks $i$ and $j$, weighted by the heights of the peaks. The height $p_{ij,y}$ is calculated by linearly interpolating between peaks $i$ and $j$ at the location $p_{ij,x}$:

$$p_{ij,x} = \frac{p_{i,y}\,p_{i,x} + p_{j,y}\,p_{j,x}}{p_{i,y} + p_{j,y}}, \qquad p_{ij,y} = p_{i,y} + \left(p_{j,y} - p_{i,y}\right)\frac{p_{ij,x} - p_{i,x}}{p_{j,x} - p_{i,x}}. \tag{3}$$

Once the air and tissue peaks have been located in both the CT and CBCT histograms, a linear transformation $T$ can be defined which maps the CBCT peak locations onto the corresponding CT peak locations:

$$T(v) = \left(v - p_{\mathrm{air},x}^{\mathrm{CBCT}}\right)\frac{p_{\mathrm{tissue},x}^{\mathrm{CT}} - p_{\mathrm{air},x}^{\mathrm{CT}}}{p_{\mathrm{tissue},x}^{\mathrm{CBCT}} - p_{\mathrm{air},x}^{\mathrm{CBCT}}} + p_{\mathrm{air},x}^{\mathrm{CT}}. \tag{4}$$

The histogram-scaled CBCT image is then produced by applying the transformation $T$ to each pixel in the image.
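The whole procedure fits comfortably in a short function. The sketch below follows steps (i)-(vi) with the distance and merge rules as reconstructed in equations (2) and (3); it is our illustration, not the authors' code, and the generalization parameter p (discussed later in the context of jet-finding) defaults to +1:

```python
def combine_peaks(peaks, n_target=2, p=1):
    """Iteratively merge (x, y) peaks until n_target remain.
    x is the peak location in HU, y the peak height."""
    peaks = list(peaks)
    while len(peaks) > n_target:
        # Steps (i)-(ii): distance d_ij for every pair; take the minimum.
        i, j = min(((a, b) for a in range(len(peaks))
                    for b in range(a + 1, len(peaks))),
                   key=lambda ab: min(peaks[ab[0]][1] ** (2 * p),
                                      peaks[ab[1]][1] ** (2 * p))
                                  * abs(peaks[ab[0]][0] - peaks[ab[1]][0]))
        (xi, yi), (xj, yj) = peaks[i], peaks[j]
        # Step (iii): height-weighted average location, and height by
        # linear interpolation between the two peaks at that location.
        x_new = (yi * xi + yj * xj) / (yi + yj)
        y_new = yi + (yj - yi) * (x_new - xi) / (xj - xi)
        # Steps (iv)-(v): replace the pair with the combined peak.
        peaks = [pk for k, pk in enumerate(peaks) if k not in (i, j)]
        peaks.append((x_new, y_new))
    return peaks
```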
Histogram specification
Histogram specification, also known as histogram matching, is a widely-used technique for transforming the histogram of one image to match that of a specified reference image (Gonzalez and Woods 2008). The cumulative frequency distributions of the CT and CBCT histograms are used to calculate a transformation that maps a given CBCT histogram bin frequency to match the corresponding bin of the CT histogram. We refer to the resulting image as the histogram-specified image.
In this study we use the implementation of histogram specification provided in the Insight Toolkit (Yoo et al 2002).
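The same filter is accessible from Python through SimpleITK, the Insight Toolkit's simplified wrapper. A usage sketch follows; the file names are placeholders, and the histogram-level and match-point counts are illustrative choices, not values from this study:

```python
import SimpleITK as sitk

cbct = sitk.ReadImage("cbct.nii.gz")  # placeholder file names
ct = sitk.ReadImage("ct.nii.gz")

# Transform the CBCT intensities so that the CBCT histogram matches
# the CT histogram.
matched = sitk.HistogramMatching(cbct, ct,
                                 numberOfHistogramLevels=1024,
                                 numberOfMatchPoints=7)
sitk.WriteImage(matched, "cbct_histogram_specified.nii.gz")
```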
Histogram-transformed image comparison
CT and CBCT images of patients undergoing radiotherapy of the pelvis area are used to demonstrate and test the histogram transformation techniques. The pixel values in the histogram-scaled and histogramspecified CBCT images are compared to those in the corresponding CT image to determine how accurately the HU calibration of the images is recovered by matching the image histograms.
An automated analysis of the pixel values, described in Joshi et al (2017), is used to select and compare regions of pixels representing fat tissue in the CT and CBCT images. In brief: small circular regions of fat tissue in the CT image are automatically and randomly selected by identifying pixels with values between −100 and 0 HU. The regions are then transferred to the CBCT image and are kept for analysis if they contain no air pixels and have a sufficiently small standard deviation, indicating that they contain pixels corresponding to a single type of tissue. The mean value of all pixels in all regions of the image is then compared between CT and CBCT.
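A rough sketch of this region-selection logic on a single 2D slice is given below. All thresholds, the region radius, and the region count are illustrative assumptions; Joshi et al (2017) should be consulted for the actual criteria:

```python
import numpy as np

def fat_region_difference(ct, cbct, n_regions=50, radius=3,
                          fat_range=(-100, 0), air_thresh=-300,
                          max_std=20.0, seed=0):
    """Mean fat-pixel difference between a CT slice and an aligned
    CBCT slice, using randomly sampled circular regions."""
    rng = np.random.default_rng(seed)
    ys, xs = np.nonzero((ct > fat_range[0]) & (ct < fat_range[1]))
    yy, xx = np.ogrid[:ct.shape[0], :ct.shape[1]]
    means_ct, means_cbct = [], []
    for _ in range(n_regions):
        k = rng.integers(len(ys))
        disk = (yy - ys[k]) ** 2 + (xx - xs[k]) ** 2 <= radius ** 2
        region = cbct[disk]
        # Keep only air-free, homogeneous regions.
        if region.min() > air_thresh and region.std() < max_std:
            means_ct.append(ct[disk].mean())
            means_cbct.append(region.mean())
    return np.mean(means_cbct) - np.mean(means_ct)
```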
Image acquisition

CT images used in this study were acquired using Siemens Somatom Definition AS and Philips Brilliance Big Bore CT scanners. CBCT images were acquired using the Elekta XVI imaging system (XVI software version 4.5, Elekta, Crawley, UK). CT images of twelve patients, and three CBCT images per patient (36 CBCT images in total), were selected for analysis. All images were anonymised prior to processing and analysis.
Histogram peak identification
Example slices of the CT and CBCT pelvis images used to test the histogram transformation techniques are shown in figures 1(a) and (b) respectively. Histograms created using pixels from all slices of the image volumes are shown in the lower panels.
The number and location of the peaks in the histogram, defined using equation (1), are susceptible to noise and depend on the bin width of the histogram. Figures 2(a) and (b) show the peaks identified in histograms of the CT image volume created using bin widths of 5 HU and 30 HU respectively.
Using a narrow bin width increases the smoothness of the histogram in areas where there are a significant number of pixels, for example the region around 1000 HU where pixels represent soft tissue such as subcutaneous fat and muscle. However, the noise in the histogram is also dramatically increased; far more peaks are found which are not caused by any anatomical features. Figure 3 shows the final six steps of the procedure outlined in section 2.3 when applied to the histogram shown in figure 2(b), with $n_{\mathrm{target}} = 2$. At each iteration of the procedure, the pair of peaks that are closest to each other, as defined by equation (2), are shown as solid red lines. The remaining peaks are shown as dashed blue lines. In the following iteration the two closest peaks are combined using equation (3).
Iterative peak combination
In the first thirteen iterations of the procedure the peaks corresponding to noise, located between 1750 and 4000 HU, are combined to produce a peak at around 2350 HU. It takes a further six iterations before the number of peaks has been reduced to $n_{\mathrm{target}} = 2$, at which point we have a peak corresponding approximately to 'air' at around 50 HU and one corresponding approximately to 'tissue' at around 1000 HU. Figure 4(a) shows the results of applying the peak combination procedure to the histogram shown in figure 2(a). The procedure is robust and not affected by the large number of peaks produced by noise, other than needing a much larger number of iterations to combine the peaks until just $n_{\mathrm{target}} = 2$ remain. The locations of the peaks obtained by the algorithm are almost independent of the width of the bins used to create the histogram. For example, the peak corresponding to tissue pixels has a HU value of 1009.5 when a histogram with a bin width of 30 HU is used, and 1008.7 when a histogram with a bin width of 5 HU is used.
In figure 4(b) the iterative peak combination procedure is applied to the histogram of the CBCT image shown in figure 1(b). The air and tissue peaks are located after only a small number of iterations.
Histogram-transformed images
Having located air and tissue peaks on both the CT and CBCT image histograms (see figure 4), equation (4) can be used to define a linear transformation that maps the CBCT image histogram peaks to those in the CT histogram. The same transformation is then applied to each pixel in the CBCT image volume. Figures 5(a) and (b) respectively show the results of applying the iterative peak combination and histogram specification algorithms to the CBCT image shown in figure 1(b). The histogram specification algorithm causes the resulting image histogram (figure 5(b)) to very closely resemble the histogram used as a reference (figure 1(a)). However, the histogram-specified image has reduced contrast and detail, and has additional artefacts compared to the histogram-scaled image shown in figure 5(a).
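The two-point linear map of equation (4) reduces to a scale and an offset computed from the air and tissue peak locations. A sketch, based on the reconstruction above (our illustration, not the authors' code):

```python
def histogram_scale(cbct, air_cbct, tissue_cbct, air_ct, tissue_ct):
    """Apply the linear transformation T of equation (4) to a CBCT
    pixel array, given the air/tissue peak locations (HU) found by
    iterative peak combination in each histogram."""
    scale = (tissue_ct - air_ct) / (tissue_cbct - air_cbct)
    return (cbct - air_cbct) * scale + air_ct
```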
Pixel value analysis

Figure 6 shows the results of the pixel value analysis described in section 2.5. Each entry in the histogram represents the result of running the automated pixel value analysis on a particular image. The bin in which the entry is placed is determined by the average difference in fat tissue pixel values between the CBCT image and the corresponding CT image. The pixels representing regions of fat in the raw CBCT images have values that are on average around 136 HU different from the values in the corresponding CT images. When the iterative peak combination algorithm is used to scale the histograms of the CBCT images, the mean absolute difference in mean fat tissue pixel value is reduced to 23.7 HU. Note that after the iterative peak combination method has been used to identify relevant features of the histograms, the subsequent linear transformation applied to the CBCT image affects all pixel values equally. Therefore the pixels representing tissue other than fat, e.g. muscle, will be scaled in the same way.

The histogram specification algorithm also produces CBCT images which have fat tissue pixels with values closer to those in the corresponding CT images. However, with a mean absolute difference in mean fat tissue pixel value of 50.9 HU, the histogram-specified CBCT images do not resemble the CT images as closely as the CBCT images processed with the iterative peak combination algorithm. Additionally, a small number of CBCT images are of poorer quality after being processed by the histogram specification algorithm, with mean differences in fat tissue pixel value of around 220 HU. An example slice of one of these images is shown in figure 7. Despite having a histogram which closely matches the shape of the corresponding CT image histogram, the histogram-specified image (figure 7(d)) exhibits many artefacts. Large areas of pixels representing fat tissue, along the posterior surface of the patient in particular, have values which are much too small after the application of histogram specification.
The results shown here for both the iterative peak combination and histogram specification methods were calculated using full CT and CBCT images without any initial registration or cropping. Differences in the imaged field of view (FOV) between CT and CBCT may contribute to the larger pixel value differences observed for the histogram-specified images, because the exact histogram shape will depend on the extent of the patient imaged. However the iterative peak combination method is not affected by differences in FOV because it only uses the positions of the major peaks in the histogram, not the full shape.
Discussion
In this report we present a novel method for analysing the histograms of CT and CBCT images, based on tools used to perform jet-finding in high-energy particle physics. We demonstrated the use of this method to derive a scaling which matches CBCT to CT images based on their histograms. CBCT images that have been corrected in this way can be used to improve the accuracy of radiotherapy dose calculations based on CBCT, or to improve image registration between CBCT and CT. We have used the iterative peak combination algorithm as an initial step in the shading correction of CBCT images (Marchant et al 2008). In this context the method has been shown to work robustly for images at different anatomical sites (pelvis, thorax, head and neck), and acquired with different fields of view.
CBCT images often contain artefacts and non-uniformities which result in pixels representing the same type of tissue having different values depending on their location in the image. These pixels, therefore, will occupy different bins of a histogram and will be affected differently by an algorithm such as histogram specification. The iterative peak combination procedure was designed keeping in mind the relationship between the features in CT and CBCT image histograms and the image structures which produce them.
When used in the context of jet-finding, equation (2) contains additional parameters, one of which controls the order in which so-called pseudo-jets are combined:

$$d_{ij} = \min\left(p_{i,y}^{2p},\ p_{j,y}^{2p}\right)\left|p_{i,x} - p_{j,x}\right|.$$

When $p = +1$ we recover our equation (2), and small peaks are combined before larger ones. Setting $p = -1$ would cause larger peaks to be combined before smaller ones, and with $p = 0$ we would ignore the height of the peaks and merge those that have HU values closest to each other. When used in jet-finding there are theoretically well-motivated reasons for the parameter $p$ to take on each of these values in different situations. We explored all three options in this work and found that, when iteratively combining the peaks of medical image histograms, only a value of $p = +1$ gives reasonable, robust results.
Our definition of a peak in a histogram (equation (1)) is sensitive to the bin width used to define the histogram, and thus to the amount of noise present in the histogram, as illustrated in figure 2. However, the iterative peak combination algorithm is robust against noise in the histograms, producing reliable results even when unrealistically large numbers of peaks are defined. The histogram specification algorithm, by contrast, is sensitive to noise in the histograms. A histogram of a real CT image is unlikely to have noise sufficient to obscure the peaks and important features of the histogram. However, it is easy to imagine a case where the CT image histogram contains a large amount of noise that is transferred to the CBCT image histogram with undesirable results.
When defining our histograms we chose to use fixed-width bins. Variable-width bins could be used to provide more detail in regions of the histogram with higher bin frequencies, and to reduce the impact of noise in regions with few entries per bin. The type of binning used would not affect the iterative peak combination procedure.

Thing et al (2016) describe the use of histogram scaling as part of a procedure to recover HU accuracy in CBCT images. Smoothed histograms, created with 100 bins, were calculated using clinical and simulated CBCT projection images. Soft tissue and air peaks were identified and used to derive a calibration function similar to the linear function shown in equation (4). The number of bins used was sufficient to remove noise in the CBCT histograms and elucidate the important features: the soft tissue and air peaks. However, the same number of bins is unlikely to work with a CT image, and would potentially need to be tweaked if a CBCT projection image was simulated with a different exposure. The iterative peak combination procedure, in contrast, can be used to locate any number of interesting peaks and can be applied to histograms created with any number of bins.

Arai et al (2017) use histogram specification to improve the accuracy of proton therapy dose calculations on CBCT images. The modified CBCT images resulted in improved proton dose calculation accuracy compared to raw CBCT. However, the histogram-specified CBCT images of head and neck cancer patients appear to show reduced contrast between fat and muscle tissue, and other artefacts similar to those visible in figures 5(b) and 7(d). In addition, the histogram specification procedure was not used in regions where the CT and CBCT images were most different: at and below the level of the shoulders. Instead the CT and CBCT images were cropped to remove the problematic regions.
Park et al (2017) overcome a similar problem by applying histogram specification to each transverse slice of a CBCT image, after it has been registered to a corresponding CT image. The iterative peak combination procedure can also be used in situations such as this, by registering the CT and CBCT images and deriving linear transformations for each transverse slice. Figure 8 shows an example of a head and neck CBCT image before and after being processed with the iterative peak combination algorithm applied on a slice-by-slice basis. Severe artefacts in the region below the shoulders are reduced after processing. The slight exaggeration of the non-uniformity in some areas of the image is a consequence of the linear transformation applied to the image pixels, rather than the iterative peak combination method itself. A linear transformation derived by any means would produce the same effect, since the contrast of a CBCT image is typically less than that of a corresponding CT image. Therefore the CBCT histogram must be 'stretched' in order for its features to align with those in the CT image. This stretching also broadens the peaks in the CBCT image, which causes any non-uniformities to be exacerbated. However, after processing, the mean position of the soft tissue peak on each slice will be more closely matched to the CT image.
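A slice-by-slice variant is then a thin loop over the aligned volumes. In this sketch, peak_locations is a hypothetical helper returning the (air, tissue) peak positions found by the iterative peak combination sketch above, and histogram_scale is the function defined earlier:

```python
import numpy as np

def histogram_scale_by_slice(cbct_vol, ct_vol):
    """Per-slice linear scaling of a CBCT volume against a registered
    CT volume (both indexed as [slice, row, column])."""
    out = np.empty_like(cbct_vol, dtype=float)
    for z in range(cbct_vol.shape[0]):
        air_cb, tis_cb = peak_locations(cbct_vol[z])  # hypothetical helper
        air_ct, tis_ct = peak_locations(ct_vol[z])
        out[z] = histogram_scale(cbct_vol[z], air_cb, tis_cb, air_ct, tis_ct)
    return out
```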
In addition to slice-by-slice processing, splitting the CBCT image into small three-dimensional patches might allow the histogram scaling approach to better correct the non-uniformities present in CBCT images, which cannot be removed using the global transformation defined in equation (4).

Amit and Purdie (2015) use histogram specification to process CBCT images of patients undergoing radiotherapy treatment of the breast. They investigate both global and, in cases where relevant structures have been delineated on the CBCT images, ROI-based histogram specification. The corrected CBCT images are then used as part of a procedure which automatically generates radiotherapy treatment plans. The treatment plans generated were of a clinically acceptable quality. However, the histogram-specified CBCT images produced using both the global and ROI-based correction exhibit quite severe artefacts. An approach such as the iterative peak combination algorithm is likely to produce images which perform equally well when used for radiotherapy dose calculations, but with fewer artefacts.
Conclusion
We present an iterative peak combination algorithm for analysing histograms of CT and CBCT images. The algorithm has only one parameter, which determines the stopping point of the iterative combination and defines how many peaks are identified in the histogram. The algorithm has been used to automatically and robustly identify air and soft tissue peaks in CT and CBCT histograms. The peak locations are used to define a linear transformation that is applied to the CBCT image pixels. The resulting transformed image has pixels with values that are more closely calibrated to the HU scale. The proposed procedure has been compared to another commonly-used histogram-based transformation method: histogram specification. Iterative peak combination has been shown to be more robust than histogram specification and produces CBCT images with fewer artefacts and pixel values that more closely resemble those in corresponding CT images. Analysis of 36 CBCT images shows a mean difference in fat tissue pixel values between CT and raw CBCT of 136.7 HU, compared to 50.9 HU for histogram-specified CBCT images and 23.7 HU for those processed using the iterative peak combination algorithm.

Figure 8. Iterative peak combination applied to each transverse slice of a head and neck image. (a) Shows a sagittal slice of a raw CBCT image of a head and neck cancer patient. Significant artefacts are present in the region of the image at and below the level of the patient's shoulders. The peak in the histogram at around 600 HU is caused by these pixels. (b) Shows the same image after each transverse slice is processed using the iterative peak combination algorithm. The HU values in the region around the shoulders have been brought in line with the remainder of the image, though some artefacts remain.
"Medicine",
"Physics"
] |
Glycycoumarin Sensitizes Liver Cancer Cells to ABT-737 by Targeting De Novo Lipogenesis and TOPK-Survivin Axis
Glycycoumarin (GCM) is a representative of the bioactive coumarin compounds isolated from licorice, an edible and medicinal plant widely used for treating various diseases including liver diseases. The purpose of the present study is to examine the possibility of GCM as a sensitizer to improve the efficacy of the BH3 mimetic ABT-737 against liver cancer. Three liver cancer cell lines (HepG2, Huh-7 and SMMC-7721) were used to evaluate the in vitro combinatory effect of ABT-737/GCM. A HepG2 xenograft model was employed to assess the in vivo efficacy of the ABT-737/GCM combination. Results showed that GCM was able to significantly sensitize liver cancer cells to ABT-737 in both in vitro and in vivo models. The enhanced efficacy of the combination of ABT-737 and GCM was attributed to the inactivation of the T-LAK cell-originated protein kinase (TOPK)-survivin axis and inhibition of de novo lipogenesis. Our findings have identified induction of the TOPK-survivin axis as a novel mechanism rendering cancer cells resistant to ABT-737. In addition, ABT-737-induced platelet toxicity was attenuated by the combination. The findings of the present study indicate that the bioactive coumarin compound GCM holds great potential to be used as a novel chemo-enhancer to improve the efficacy of BH3 mimetic-based therapy.
Introduction
Disruption of mitochondrial membrane potential (MMP) is one of the key events in the apoptotic process [1]. MMP is tightly regulated by Bcl-2 family proteins, including the anti-apoptotic members Bcl-2, Bcl-xl and Mcl-1, and the pro-apoptotic members Bax, Bak, Bim, Puma, Bid and Bad [2]. The evasion of apoptosis is one of the hallmarks of cancer cells, and is commonly associated with aberrant expression of anti-apoptotic Bcl-2 family members [3]. Apoptosis defects not only promote cancer cell survival, but also render cancer cells refractory to therapeutic drugs [4,5]. Therefore, these Bcl-2 family anti-apoptotic proteins are considered to be rational targets for targeted cancer treatments. To this end, several small molecule inhibitors of anti-apoptotic Bcl-2 family proteins, named BH3 mimetics, have been developed [6]. These inhibitors can specifically bind to the hydrophobic groove of the anti-apoptotic Bcl-2 family proteins, leading to inhibition of their anti-apoptotic function followed by activation of the mitochondrial apoptotic pathway. Among these inhibitors, ABT-737 and its orally available derivative ABT-263 are relatively well investigated [7,8].
Apoptosis Evaluation
Apoptosis was determined by flow cytometry following Annexin V/PI double staining of externalized phosphatidylserine (PS) in apoptotic cells, using an Annexin V/PI staining kit from MBL International Corporation (Boston, MA, USA).
Calculation of Combination Index
The synergistic effects between ABT-737 and GCM were quantitatively assessed by calculating the combination index (CI) using the Chou-Talalay equation [19]. The cells were treated with various concentrations of ABT-737, GCM and their combination. The overall inhibitory effect was evaluated by crystal violet staining as described above. The combination index was calculated using the following equation (1):

$$\mathrm{CI} = \frac{C_{A,x}}{IC_{x,A}} + \frac{C_{B,x}}{IC_{x,B}}, \tag{1}$$

where $C_{A,x}$ and $C_{B,x}$ are the concentrations of agent A and agent B used in combination to achieve an $x\%$ combinatory effect, and $IC_{x,A}$ and $IC_{x,B}$ are the concentrations of each single agent required to achieve the same effect. CI < 1, CI = 1 and CI > 1 indicate synergism, an additive effect, and antagonism, respectively.
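The calculation itself is a one-liner. A sketch with made-up concentrations (not data from this study):

```python
def combination_index(c_a, c_b, ic_x_a, ic_x_b):
    """Chou-Talalay combination index, equation (1):
    CI = C_A,x/IC_x,A + C_B,x/IC_x,B. CI < 1 indicates synergism."""
    return c_a / ic_x_a + c_b / ic_x_b

# If 10 uM GCM plus 5 uM ABT-737 achieves the effect that would need
# 40 uM GCM or 15 uM ABT-737 alone: CI = 10/40 + 5/15 ~ 0.58 (synergy).
print(combination_index(10, 5, 40, 15))
```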
Western Blotting
Cells were lysed with ice-cold RIPA (radio-immunoprecipitation assay) buffer containing protease inhibitor. Equal amounts of protein from the samples were loaded onto the gel. After electrophoretic separation, the proteins were transferred to a nitrocellulose membrane. The membrane was subsequently probed with primary antibodies, followed by incubation with the corresponding secondary antibody. The immunoreactive blots were visualized using enhanced chemiluminescence.
RNA Interference
The cells were transfected with 7.5 nM of PBK/TOPK-siRNA and 50 nM survivin-siRNA or negative control siRNA using INTERFERin siRNA transfection reagent according to the manufacturer's instructions (Polyplus-Transfection, Illkirch, France). 24 h post-transfection, the cells were used for subsequent experiments.
Animal Study
The in vivo combinatory anti-cancer activity of GCM and ABT-737 was evaluated using a HepG2 xenograft model. Animal care and experimental protocols were approved by the Institutional Animal Care and Use Committee (China Agricultural University). Mice were housed in a pathogen-free barrier facility accredited by the Association for Assessment and Accreditation of Laboratory Animal Care, and all animal procedures were carried out in accordance with institutional guidelines for animal research. To establish the cancer xenograft, 2 × 10^6 HepG2 cells were mixed with Matrigel (50%) (Becton Dickinson, NJ, USA) and injected subcutaneously (s.c.) into the right flank of 6-7-week-old male BALB/c athymic nude mice (Charles River Laboratories). Tumors were measured with a caliper and tumor volumes were calculated using the formula V = (w1 × w2 × w2)/2, where w1 is the largest tumor diameter and w2 is the smallest tumor diameter. When the tumor volume reached about 100-120 mm^3, GCM (10 mg/kg), orlistat (100 or 150 mg/kg) and ABT-737 (100 mg/kg) were given by intraperitoneal injection. GCM was given every day; ABT-737 and orlistat were given every two days. GCM and orlistat were dissolved in 5% Tween 80. ABT-737 was dissolved in 30% propylene glycol, 5% Tween 80 and 65% D5W (5% dextrose in water). Body weight and tumor volume were evaluated every other day. The tumor tissues were collected and stored at −80 °C.
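For reference, the caliper-based volume formula works out as follows (the diameters are illustrative, not data from the study):

```python
def tumor_volume(w1, w2):
    """V = (w1 * w2 * w2) / 2, with w1 the largest and w2 the
    smallest tumor diameter (mm); returns mm^3."""
    return 0.5 * w1 * w2 * w2

print(tumor_volume(8.0, 5.0))  # 8 mm x 5 mm tumor -> 100.0 mm^3
```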
Measurement of Platelets Concentration
Mice were treated with 10 mg/kg GCM for 4 days and then 100 mg/kg ABT-737 was given. Blood was collected 4 h after ABT-737 treatment and platelet concentration was assessed via a routine blood test. This experimental design was based on ref. [15], in which a single i.p. injection of ABT-737 resulted in a significant reduction in platelet count within 4 h of administration.
Statistical Analysis
Data are presented as mean ± SD (standard deviation). The data were analyzed by ANOVA with appropriate post-hoc comparisons among means using GraphPad Prism 6.0 (GraphPad Software, San Diego, CA, USA).
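A minimal sketch of this analysis in Python, on made-up group data (the study itself used GraphPad Prism):

```python
from scipy import stats

# Illustrative replicate measurements for three treatment groups.
control = [100, 98, 103, 101]
abt737 = [80, 78, 84, 81]
combo = [45, 50, 47, 44]

# One-way ANOVA across the groups; a post-hoc test (e.g. Tukey HSD,
# available as pairwise_tukeyhsd in statsmodels) would then compare
# individual pairs of means.
f_stat, p_value = stats.f_oneway(control, abt737, combo)
print(f"F = {f_stat:.2f}, p = {p_value:.4f}")
```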
Combining GCM with ABT-737 Synergistically Induces Cell Death in Multiple Types of Liver Cancer Cells
To evaluate the combined effect of GCM and ABT-737 on liver cancer cells, HepG2 cells were first employed. The cells were treated with each agent alone or their combination at the concentrations indicated for 24 h, and cell death was measured by Annexin V/PI staining. As shown in Figure 1A (left), exposure to 25 µM GCM did not increase cell death induction, whereas treatment with 12.5 µM ABT-737 caused a modest increase in cell death. Treatment with their combination resulted in significantly enhanced cell death induction in HepG2 cells. To confirm this enhancement, two additional liver cancer cell lines, SMMC-7721 and Huh-7, were tested, and similar enhanced cytotoxicity by the combination was observed in both cell lines (Figure 1A, middle and right). We next asked whether the enhanced effect was a synergistic action. HepG2 cells were treated with each agent alone or their combination in a fixed ratio of 2:1 (GCM:ABT-737) for 24 h and cell viability was detected by crystal violet staining. As shown in Figure 1B (left), treatment with GCM (10-40 µM) or ABT-737 (5-20 µM) individually caused a dose-dependent decrease in cell viability, whereas the combinations decreased cell viability significantly further. The data were then used to calculate the combination index based on the Chou-Talalay method [19], and the results are shown in Figure 1B (right). The values of the combination index were less than 1 for all combinations tested, supporting a synergistic nature of the GCM/ABT-737 combination effect on liver cancer cells. To examine whether the potentiated effect can also be found in non-malignant hepatocytes, we measured the changes in cell viability in response to GCM and/or ABT-737 in AML12 cells. As shown in Figure 1C, the enhanced effect was not detected after the same treatments in these non-tumorigenic mouse liver cells. These data suggest that the sensitization effect may be specific to the tumor cells.
Co-Treatment of GCM and ABT-737 Results in Enhanced Tumor Growth Inhibition in HepG2 Xenograft Model
Having found the synergistic effect of the GCM/ABT-737 combination in the cell culture model, we next asked whether the enhancement can be achieved in vivo. Treatments were initiated when the average tumor volume reached about 100 mm^3, as described in the Materials and methods. To increase the likelihood of detecting an enhanced combinatory effect, we used doses of each agent that by themselves caused a modest tumor reduction based on our dose-finding experiment. As shown in Figure 2A,B, treatment with ABT-737 (100 mg/kg body weight, every two days) significantly inhibited tumor growth, reducing the final tumor weight by 20.1%, whereas a comparable inhibitory effect was achieved by daily treatment with GCM (10 mg/kg body weight). Combining ABT-737 with GCM resulted in a further enhanced inhibitory effect on tumor growth, decreasing the final tumor weight by 64.2%. The serum levels of ALT, a key biochemical marker of hepatotoxicity, were not significantly increased in the combination-treated mice compared with ABT-737 treatment alone (Figure 2C). Body weight did not differ among the treatment groups (Figure 2D). These results suggest that the combination was well tolerated by the mice, and indicate that the therapeutic efficacy of ABT-737 in vivo can be improved by combining it with GCM.
Having found the improved efficacy of the combination, we then questioned whether the ABT-737-mediated platelet toxicity was also increased by the combination. Blood samples were collected and platelet concentration was measured by a routine blood test (Figure 2E). As shown in Figure 2F, a significantly reduced platelet count was observed in ABT-737-treated blood samples, whereas this change was significantly ameliorated by combining ABT-737 with GCM, suggesting that GCM protected against ABT-737-mediated platelet toxicity rather than increasing it.
Down-Regulation of Survivin Is Involved in the Cell Death Induced by the Combination of GCM and ABT-737
Anti-apoptotic Bcl-2 family protein Mcl-1 has been identified as an important target for sensitizing cancer cells to ABT-737 [20]. We next asked whether this is the case in the present study. As shown in Figure 3A, neither GCM alone nor the combination decreased the expression of Mcl-1 in the cell culture model. However, down-regulation of survivin, a key inhibitor of apoptosis protein (IAP), was observed in response to either GCM alone or the combination, whereas ABT-737 alone caused an increased survivin expression. We further validated the changes in survivin expression in the in vivo model. As shown in Figure 3B, ABT-737 alone induced a significant up-regulation of survivin expression in tumor tissues, whereas the expression of survivin in tumor was significantly reduced by either GCM alone or the combination. To determine the role of the down-regulation of survivin in the sensitization of cancer cells to ABT-737, we assessed the influence of survivin knockdown on cell viability and the results are shown in Figure 4C. Knockdown of survivin resulted in significantly increased cytotoxicity in response to ABT-737, suggesting that the inhibition of survivin was sufficient to potentiate the cancer cells to ABT-737. To further confirm the role of survivin in the sensitization, we tested the effect of YM155 [21], a chemical inhibitor of survivin, on ABT-737-induced apoptosis in HepG2 cells, and the results demonstrated that the inhibitor significantly sensitized the cancer cells to ABT-737-induced apoptosis (cleaved PARP) (Figure 3D). Together, these results clearly indicated that down-regulation of survivin contributed to the sensitization effect of GCM.
Down-Regulation of Survivin Is Attributed to the Inactivation of TOPK
Our previous study has shown that GCM can directly bind to the oncogenic kinase T-LAK cell-originated protein kinase (TOPK) and inhibit its kinase activity [17]. We questioned whether the down-regulation of survivin by GCM was due to its ability to inactivate TOPK. As shown in Figure 4A, ABT-737 increased the level of TOPK phosphorylation, which was decreased by GCM in HepG2 cells. The change in TOPK phosphorylation correlated well with the phosphorylation level of survivin. To validate these in vitro findings, we analyzed the changes in TOPK and survivin in the tumor samples. As shown in Figure 4B, treatment with ABT-737 led to increased phosphorylation of H3, a known substrate of TOPK, and of survivin, whereas exposure to GCM abolished the increased phosphorylation of H3 and survivin. To determine the relationship between TOPK and survivin, we examined the effect of TOPK inhibition by RNAi on the expression of survivin. As shown in Figure 4C, knockdown of TOPK resulted in decreased total and phosphorylated survivin, suggesting a role of TOPK in the regulation of survivin. Moreover, when TOPK was inactivated, HepG2 cells were more sensitive to ABT-737-induced apoptosis (Figure 4D). Together, these results suggested that the sensitization effect of GCM was attributable to its ability to suppress ABT-737-induced TOPK-survivin pro-survival signaling.
Inhibition of De Novo Lipogenesis Contributes to the Sensitization Effect of GCM In Vitro and In Vivo
Enhanced de novo fatty acid synthesis is a metabolic hallmark of cancer [22]. Fatty acid synthesis inhibition by targeting key lipogenic enzymes has been recognized as a promising cancer therapeutic approach [23]. Our previous study has shown that GCM can inhibit lipid accumulation in a model of non-alcoholic fatty liver disease [24]. We then asked whether the inhibition of lipogenesis by GCM contributed to its sensitization effect. HepG2 cells were treated with ABT-737 and/or GCM for the indicated times, and changes in the key regulators of lipogenesis, AMPK and its substrate acetyl-CoA carboxylase (ACC), were assessed to determine whether GCM can inhibit fatty acid synthesis in the present setting. As shown in Figure 5A, the phosphorylation levels of AMPK and ACC (inhibitory phosphorylation) were increased by either GCM alone or the combination. Moreover, these changes were also found in the tumor samples (Figure 5B), suggesting that the key lipogenic enzyme activity of ACC was inhibited in the presence of GCM. To determine the contribution of ACC inhibition to the sensitization effect, we tested the influence of orlistat, a known ACC inhibitor, on cell death induction by ABT-737 in the cell culture model. As shown in Figure 5C, cell death induction by ABT-737 was significantly increased in the presence of orlistat. We further confirmed these in vitro results in the HepG2 xenograft model. The dose of orlistat (100 mg/kg) used for the in vivo study was determined based on the dose-finding experiment. As shown in Figure 5D-F, either ABT-737 or orlistat alone significantly inhibited tumor growth, leading to a reduction of the final tumor weight by approximately 20%. As expected, the combination caused a further enhanced tumor growth inhibition (Figure 5D), resulting in a decrease of the final tumor weight by 59.8% (Figure 5E,F) without affecting body weight (Figure 5G). These results suggested that the inhibition of ACC by GCM may contribute to its sensitization effect on ABT-737.
TOPK-Survivin Axis and AMPK-ACC Axis Are Involved Separately in the Sensitization Effect of GCM on ABT-737
The above data indicated that both the TOPK-survivin and AMPK-ACC axes contributed to the sensitization effect of GCM. We next asked whether a crosstalk exists between these two pathways. The cells were treated with orlistat (20, 40 µM) for the indicated time and the changes in phosphorylation levels of ACC, TOPK and survivin were analyzed by western blotting. As shown in Figure 6A, treatment with orlistat led to increased phospho-ACC without affecting the phosphorylation status of TOPK/survivin. These in vitro results were consistent with those found in tumor tissues (Figure 6B). The data suggested that the TOPK/survivin axis was unlikely to be a downstream target of ACC. Conversely, TOPK was silenced by RNAi and the changes in the AMPK-ACC axis were examined. As shown in Figure 6C, inhibition of TOPK did not affect the phosphorylation of AMPK and ACC. Together, these data indicated that the two axes contributed separately to the sensitization effect of GCM. Accordingly, simultaneous inhibition of the two axes resulted in a stronger enhanced effect compared to suppression of each axis alone (Figure 6D).
Discussion
BH3 mimetics ABT-737 and ABT-263 are representative of molecular targeted therapeutic agents with promising therapeutic efficacy. However, the drug resistance and dose-limiting toxicity pose major challenges for their clinical use [15]. The findings of the present study demonstrated that combining ABT-737 with GCM not only improved the efficacy, but also ameliorated the toxicity. Our findings therefore provided a possible new option for promoting utilization of BH3 mimetics.
Mcl-1-mediated apoptotic resistance is suggested to be the key cause of cancer cells being refractory to ABT-737 [20]. Indeed, several studies have shown that targeting Mcl-1 by either chemical agents or genetic approaches resulted in significantly enhanced anti-cancer activity of ABT-737 in both in vitro and in vivo models [9,25,26]. To decipher the mechanisms involved in the sensitization effect of GCM on ABT-737, we measured the influence of GCM on Mcl-1 expression; no inhibitory effect of GCM on Mcl-1 was found in the present study, ruling out the possibility of Mcl-1 as a target for GCM to exert its potentiation effect. Interestingly, we found that survivin, a key inhibitor of apoptosis protein (IAP) [27], was induced in response to ABT-737 in both in vitro and in vivo models, and this induction was abolished by combining ABT-737 with GCM. We further revealed that ABT-737-induced survivin was attributable to its ability to activate the oncogenic kinase TOPK, whereas the inhibition of survivin by GCM was due to its direct inactivation of TOPK [17]. Moreover, the functional role of activation of the TOPK-survivin axis in the regulation of the apoptotic effect of ABT-737 was critically demonstrated by the experiments showing that silencing either TOPK or survivin was sufficient to sensitize the cancer cells to ABT-737. Our present study therefore identified induction of TOPK-survivin as a novel mechanism that renders cancer cells resistant to ABT-737, and targeting this axis may represent a novel approach to augment the therapeutic efficacy of ABT-737. In addition, we provided evidence that survivin is a novel downstream target of TOPK that may contribute to the tumorigenic activity of TOPK. Whether survivin is a direct substrate of TOPK is being investigated.
Tumor growth requires continuous biosynthesis, including de novo fatty acid synthesis. Enhanced de novo lipogenesis is the third metabolic feature of cancer, in addition to alterations in glucose and glutamine metabolism [22]. Among several lipogenic genes, ACC is one of the rate-limiting enzymes that control the lipogenic process [28], and elevated expression of ACC has been found in a variety of cancer types including liver cancer [22,29]. Therefore, ACC has been recognized as a novel target for cancer treatment. Indeed, several recent studies have demonstrated that inhibition of ACC-mediated lipogenesis by either genetic or pharmacological approaches offered promising anti-cancer efficacy against a wide range of cancers in pre-clinical models [22,29,30]. Moreover, suppression of ACC can increase the sensitivity of cancer cells to certain therapeutic drugs [22,29]. However, whether inhibition of ACC-mediated lipogenesis can improve the efficacy of BH3 mimetics such as ABT-737 has not been addressed. In the present study, we investigated the anti-liver cancer effect of combining ABT-737 with orlistat using both in vitro and in vivo models, and the results demonstrated that a strongly enhanced anti-cancer effect was achieved by the combination compared with each agent alone, supporting targeting of de novo fatty acid synthesis as an effective approach to potentiate cancer cells to ABT-737. These results supported inhibition of lipogenesis as an additional mechanism, beyond the inhibitory effect on the TOPK-survivin axis, contributing to the sensitization effect of GCM. However, a recent study by Nelson et al. shows that inhibition of hepatic lipogenesis by liver-specific knockout of ACC leads to an increased liver tumor incidence in comparison with the control in a chemical-induced carcinogenesis model [31], supporting a protective role of ACC-mediated lipogenesis in tumorigenesis. This controversial role of ACC-mediated lipogenesis in carcinogenesis may be associated with the stage of tumorigenesis. We speculate that lipogenesis prevents normal cell transformation or cancer initiation by regulating oxidative stress, but promotes tumor progression at late stages by fueling the cancer cells. This hypothesis clearly needs to be tested in future studies.
Bcl-xl is indispensable for platelet survival [32]; thus, it is not surprising that BH3 mimetics induce platelet suppression, which poses a major hurdle for their clinical use. In the present study, we demonstrated that the combination resulted in attenuated platelet toxicity, providing an additional valuable attribute for GCM as a combinatory agent with ABT-737. The detailed mechanisms underlying the protective effect of GCM on platelets are being investigated.
Our previous study has shown that GCM inhibits palmitate-induced lipoapoptosis in a number of liver cells [33]. In the present study, we found that GCM can sensitize liver cancer cells, but not non-malignant liver cells, to ABT-737-induced apoptosis. These data suggest that GCM can exert either pro-survival or pro-death activity depending on the context. The determinants that govern the decision between the pro-survival and pro-death activities of GCM might be associated with the signaling pathways activated by the stimuli. For example, palmitate induced lipoapoptosis via inactivating autophagy and inducing ER stress, whereas GCM offered protection through activating autophagy and inhibiting ER stress. ABT-737 induced cell death by targeting anti-apoptotic Bcl-2 family proteins, whereas GCM potentiated the cancer cells to ABT-737 through suppressing the TOPK/survivin axis and inhibiting de novo lipogenesis.
Conclusions
GCM can potentiate liver cancer cells to ABT-737-induced apoptosis in both cell culture and animal models by targeting the TOPK-survivin pro-survival signaling pathway and inhibiting ACC-mediated de novo lipogenesis (Figure 7). The findings of the present study suggest that the bioactive coumarin compound GCM holds great potential to be used as a novel chemo-potentiating agent for the improvement of ABT-737-based therapy.
"Biology"
] |
Recent Development in Phosphonic Acid-Based Organic Coatings on Aluminum
Research on corrosion protection of aluminum has intensified over the past decades due to environmental concerns regarding chromate-based conversion coatings and also the higher material performance requirements in the automotive and aviation industries. Phosphonic acid-based organic and organic-inorganic coatings are increasingly investigated as potential replacements for toxic and inefficient surface treatments for aluminum. In this review, we briefly summarize recent work (since 2000) on pretreatments and coatings based on various phosphonic acids for aluminum and its alloys. Surface characterization methods, the mechanism of bonding of phosphonic acids to the aluminum surface, methods for assessing the corrosion behavior of the treated aluminum, and applications are discussed. There is a clear trend to develop multifunctional phosphonic acids and to produce hybrid organic-inorganic coatings. In most cases, the phosphonic acids are either assembled as a monolayer on the aluminum or incorporated in a coating matrix on top of the aluminum, which is either organic or organic-inorganic in nature. Increased corrosion protection has often been observed. However, much work is still needed in terms of ecological impact and adaptation to industrially-feasible processes for possible commercial exploitation.
Introduction
Aluminum (Al) and its alloys have been widely used in engineering applications because of their high strength-to-weight ratio, ductility, formability, and low cost. In many applications, such as in aircraft, automobiles, and structural parts in buildings, corrosion resistance becomes very important. Aluminum by itself is relatively stable against corrosion because it readily oxidizes to form a passive protective oxide layer on its surface, which is robust and does not simply flake off. However, the galvanic corrosion of aluminum is critical and is complicated in various alloys and under acidic pH. An excellent review on corrosion protection of aluminum prior to the year 2000 can be found in the literature [1]. The need to develop chromate-free treatments for aluminum is becoming increasingly important, and many organic and inorganic protective coatings are being investigated by researchers, each with potential advantages and shortcomings [1]. Organic-inorganic conversion coatings are quite attractive due to the simplicity of their application and possible favorable interaction with organic layers in hybrid materials, which offer enhanced corrosion resistance and mechanical performance [2].
A variety of phosphonic acids is commonly used to modify the surfaces of metals and their oxides for corrosion protection, stabilization of nanoparticles, improved adhesion to organic layers, hydrophobization, hydrophilization, etc. They are excellent chelating agents and bind very strongly to metals, resulting in various 1D to 3D metal organic frameworks (MOFs), also called metal phosphonates [3,4]. They form hydrolytically-stable bonds with the metals and provide excellent coverage compared to many thiols, silanes, and carboxylic acids. A phosphonate corrosion inhibitor adsorbs well on the metal surface, reducing its solubility in aqueous media; this decreases the area of active metal surface and increases the activation energy for hydrolysis. Such phosphonic acids can be simple molecules in which the phosphonic acid is attached to an alkyl or aromatic group. The self-assembly of such phosphonic acids on aluminum surfaces is well studied, and the binding of these molecules to aluminum can be monodentate, bidentate, or tridentate [5–7]. More recently, there has been an increased impetus to develop functional phosphonic acids, which can not only bind to the metal surface but also provide linkage to an organic matrix in multicomponent systems [8,9]. In some cases, the phosphonic acid acts as a linker between aluminum and a hydrophobic polymerizable group (pyrrole) [10] or a chemically-stable protective material (graphene oxide) [11]. Such molecules are applied to the surface of aluminum via dip coating or spraying from aqueous solutions, or are incorporated in a polymeric matrix (adhesives, glues, or paints) which is then coated on the metal surface. A novel ultrasonic-assisted deposition (USAD) method has also been developed to coat phosphate films on aluminum [12]. This application procedure is believed to improve the interaction of the phosphonic acid with the aluminum surface.
Functional phosphonic acids with hydrophobic aliphatic groups or fluorinated groups can increase the hydrophobicity of the metal surface, thereby acting as a barrier to aqueous solutions and improving its corrosion protection. In some cases, the functional groups of phosphonic acids react with organic layers (adhesives) and provide a stable barrier against aqueous environments.
The following sections first briefly summarize the methods for characterizing phosphonic acids on aluminum surfaces and several techniques used to evaluate corrosion behavior, as background for the subsequent chapter, which deals with the application of various types of phosphonic acids on aluminum, together with their physico-chemical behavior relevant to corrosion protection and adhesion to organic layers. A distinction is made between aluminum surface treatments in which phosphonic acids or phosphates form a pre-coating, and surface coatings, such as paints, that contain dissolved phosphonic acids.
Characterization of the Phosphonic Acid-Modified Aluminum Surfaces
Self-assembled monolayers (SAM) are ordered molecular assemblies and are commonly used to modify aluminum surfaces. They are spontaneously formed by the adsorption of molecules with head groups that show affinity to a specific substrate [13]. Phosphonic acids form SAMs on aluminum surfaces by condensation reactions of the acid functional group with basic surface-bound alumino-hydroxyl species. The phosphonic acid head group shows a strong affinity to the aluminum substrate, creating an alumino-phosphonate linkage via a strong Al-O-P chemical bond: R-PO(OH)2 + Al-OH → R-PO(OH)-O-Al + H2O [5–7].
To verify the successful formation of the organophosphorus layers, and to obtain information about the properties of the films (surface morphology, structural ordering, density and uniformity, binding mode to the aluminum substrate, as well as stability), different analytical methods have been applied in the literature. A detailed characterization of the layers is necessary to understand their intended function (for instance, corrosion protection, adhesion promotion, (non-)wettability, or surface passivation). Table 1 summarizes the various analytical techniques that have been used in the last 20 years or so to study the organization and chemistry of organophosphorus compounds on an aluminum substrate. We have categorized the characterization techniques into three groups: surface morphology; presence, composition, and stability of the layers; and orientation of molecules and binding mode to aluminum. All are briefly explained in the remainder of this section.
Surface Morphology
Microscopy measurements, i.e., optical microscopy, scanning electron microscopy (SEM) and atomic force microscopy (AFM), were used to investigate the surface morphology of the organophosphorus self-assembled monolayers onto the aluminum substrate. Physical characteristics, like uniformity and homogeneity of the surface phosphatization, were obtained by comparing the coated with the bare aluminum substrate using SEM [14]. AFM has been used to obtain information about the surface smoothness [8,15].
Presence, Composition, and Stability of Layer
A variety of surface-sensitive analytical techniques has been used to investigate the presence of the layer and its composition. X-ray photoelectron spectroscopy (XPS) and Auger electron spectroscopy (AES) are both methods which determine the elemental composition in the first couple of nanometers of a surface and, hence, were often applied to verify the presence of phosphorus, indicating a successful organophosphorus coating chemically bound to aluminum [6,8,11,16,17,19–22]. As a general preparative step, the samples were usually rinsed/washed after the coating process to remove residual chemicals and physically adsorbed organophosphorus compounds. In addition to confirming the presence of the organophosphorus coating, XPS was also used to study the concentration of the organic molecules on the aluminum surface [18]. The experimental phosphorus-to-aluminum concentration ratio allowed quantification of the organophosphorus molecule surface concentration, and modelling of the XPS data helped to determine the thickness of the monolayer [18]. Thereby, the ratio of signals from the coating and from the underlying aluminum substrate was recorded, from which the coating thickness can be derived. Depth profiling via argon ion sputtering in combination with XPS was also used to determine the coating thickness (especially for thicker layers). The comparison between experimental and theoretical values for the carbon-to-phosphorus ratio was used as proof that chemically-uniform organophosphorus films formed on aluminum [7]. The recording of characteristic binding energies allowed the individual steps in the decomposition of the phosphonates on the aluminum surface to be followed [20]. Table 2 summarizes XPS binding energies found in the literature for organophosphorus layers on metal surfaces.
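To illustrate the thickness determination from overlayer/substrate intensity ratios mentioned above, the following minimal sketch (not the actual model of [18]; all parameter values are hypothetical) assumes simple exponential attenuation of photoelectrons with an inelastic mean free path λ and take-off angle θ:

```python
import numpy as np
from scipy.optimize import brentq

def thickness_from_xps(ratio_meas, k, lam_over, lam_sub, theta_deg):
    """Estimate overlayer thickness d (nm) from the measured
    overlayer/substrate XPS intensity ratio.

    Assumed model: I_sub ∝ exp(-d / (λ_sub·cosθ)) and
    I_over ∝ 1 − exp(-d / (λ_over·cosθ)); k is the sensitivity
    factor relating the two signals (instrument/material dependent).
    """
    cos_t = np.cos(np.radians(theta_deg))

    def residual(d):
        i_over = 1.0 - np.exp(-d / (lam_over * cos_t))
        i_sub = np.exp(-d / (lam_sub * cos_t))
        return k * i_over / i_sub - ratio_meas

    # Bracket the root between "no layer" and an implausibly thick layer.
    return brentq(residual, 1e-6, 50.0)

# Hypothetical numbers: P/Al ratio of 0.08, k = 0.5, IMFPs of 3 nm, 45° take-off
print(f"d ≈ {thickness_from_xps(0.08, 0.5, 3.0, 3.0, 45.0):.2f} nm")
```

With these illustrative inputs the model yields a sub-nanometer thickness, consistent with a monolayer-scale film; real analyses require element-specific mean free paths and sensitivity factors.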
Another analytical technique often used to study the presence and composition of organophosphorus layers on aluminum is reflectance Fourier-transform infrared spectroscopy (FTIR), often measured at a grazing angle to increase the surface sensitivity. FTIR was applied to investigate the adsorption of the coating by detecting specific vibrational modes involving phosphorus bonds [8,11,14,16,17,23,24]. Table 3 contains a summary of the frequencies of vibrational modes involving the phosphorus atom found in the literature for self-assembled organophosphorus monolayers on aluminum. Time-of-flight secondary ion mass spectrometry (ToF-SIMS), as a technique complementary to XPS, has also been used to demonstrate the presence of the organophosphorus layer, for example by mapping selective functionalization on the aluminum and glass regions of a surface [6]. In addition, surface plasmon resonance spectroscopy (SPR) was applied to examine the kinetics of the adsorption process [27]; it was found that the adsorption of phosphonic acids on aluminum starts very quickly and reaches a plateau within minutes.
Water contact angle (WCA) measurements were used to investigate the hydrophilicity and hydrophobicity of the surface. A change compared to a reference untreated aluminum surface indicated the successful formation of an organophosphorus layer. In addition, WCA measurements were also applied to study the orientation of the molecules (see also below) because in well-ordered monolayers, the access of the water drop to the aluminum surface is reduced [7]. From the investigation of the temporal behavior of WCA, information about surface coverage and chain ordering was obtained [5]. Finally, the stability of the adsorbed layers was determined by recording advancing and receding contact angles by repeated cycling [10,26].
Orientation of Molecules and Binding Mode to Aluminum
In order to characterize the binding (orientation and mode) of the organophosphorus molecules to an aluminum surface, mostly three methods have been used in the literature: infrared reflection absorption spectroscopy (IRRAS) at a grazing angle (often applying polarized infrared radiation), X-ray photoelectron spectroscopy at a fixed electron take-off angle as well as angle-resolved (ARXPS), and solid-state 31P nuclear magnetic resonance (NMR) spectroscopy. The expected orientation, with the phosphonic acid group reacting with the surface hydroxyl groups of the aluminum substrate and the hydrocarbon tail and/or terminal functional groups on top, was proven by angle-resolved XPS: the carbon (tail) to phosphorus (head group) intensity ratio was determined at low and high electron take-off angles, representing two different information depths [7,10,27,28]. In addition, by comparing the IRRAS intensities of vibrations polarized perpendicular or parallel to the hydrocarbon backbone, information about the orientation of the organophosphorus molecule was also obtained [5].
The phosphonic group can bind to the aluminum atom via a direct P-O-Al bond in different modes, i.e., mono-, bi-, or tridentate, depending on whether one, two, or all three oxygen atoms of the phosphonic group are involved, respectively. It has been found that both the deposition mode (for instance, stirring or sonication) and the structure of the phosphonates play a role [12]. Additionally, an evolution from tridentate to lower binding modes as film formation proceeds (increasing coverage can change the chain ordering) has been observed in the literature [5]. These behaviors might explain the variety of binding modes reported in the literature for organophosphorus layers on aluminum. Table 3 summarizes the frequencies of infrared vibrational modes found in the literature for self-assembled organophosphorus layers on aluminum. By observing which vibrational bands are present and which are not, certain binding modes have been assigned (mostly bidentate [16,29] and tridentate [12,32,33], and some monodentate [30]). For instance, a missing P-OH stretching mode together with the presence of PO3^2− stretching modes (deprotonation of the phosphonic acid group) in the IR spectrum indicates the formation of a bidentate binding. Mixtures of binding modes have also been found [5,30,31]. As a second technique to investigate the binding mode of organophosphorus molecules to aluminum, X-ray photoelectron spectroscopy (XPS) was used. Table 2 summarizes the XPS binding energies found in the literature for self-assembled organophosphorus layers on aluminum. As can be seen from Table 2, due to overlapping bands, neither the aluminum nor the phosphorus signal allows one to clearly differentiate between the various binding modes of phosphorus to aluminum. It should be noted that the experimental binding energy for the structure [POn(OR)m]^y− tends to increase as the ratio n:m of "free" O ligands (n) to covalently bound OR ligands (m) is stepwise changed from 4:0 to 3:1 to 2:2 and to 1:3. Therefore, the XPS phosphorus binding energy has also been used to assign the binding mode (i.e., mono-, bi-, or tridentate) [34]. However, a final conclusion cannot be drawn from the P 2p signal alone, and further experimental evidence is needed. The O 1s signal is of particular interest for assessing the type of bond between the organophosphorus compound and the aluminum substrate. As the binding energy for P=O and P-O-Metal differs from that for P-OH and P-OR (see Table 2), the ratio of the intensities at these two binding energies was used to determine the binding mode. In agreement with other studies using IR (see above), a mixed monodentate and bidentate mode was found [34]. As a third method, the linewidths and shifts of peaks in 31P NMR spectra have been used to study the bonds between the aluminum and the oxygen atoms of the organophosphorus molecule and to assign a specific binding mode by comparison with literature data [12,31].
Corrosion Evaluation Methods
The corrosion resistance of the coated aluminum surface is evaluated mainly in four different ways: electrochemical measurements, conventional measurements, spectral analysis, and surface analysis, briefly summarized in Table 4. In most cases, treated aluminum substrates are coated with an epoxy layer to study their adhesion and corrosion performance. The corrosion resistance of aluminum substrates is highly related to the microstructure, chemical composition, and electrochemical behavior of the coating, and to the coating/metal interface. Therefore, the measurements listed in Table 4 are usually combined to gather more information and to analyze the corrosion behavior from different perspectives.
Electrochemical Measurements
Electrochemical changes occurring during coating failure can be detected via an electrical signal, which provides information on metal corrosion and on alterations of the coating properties. Quantitative and semi-quantitative evaluation of coatings can be achieved by electrochemical measurements such as EIS, ENM, and the hydrogen permeation current method.
Electrochemical Impedance Spectroscopy (EIS)
EIS is a non-destructive measurement which can provide time-dependent surface information in a corrosive medium. A small-amplitude sinusoidal alternating signal is applied to the coating/primer/metal system. By analyzing the impedance and admittance spectra, an equivalent circuit model, shown in Figure 1, is built to evaluate the electrochemical behavior of the coating system. As the electrolyte penetrates the coating, C_coat increases and R_coat decreases. A surface with a low-frequency impedance below 10^7 Ω·cm² is considered a poor protective barrier [44].
EIS is a nondestructive method and provides fast detection and comprehensive information on the corrosion behavior of a coating. The ASTM G106-89 standard has been developed for EIS measurements. However, EIS only provides overall information on the corrosion behavior of the surface, which means it cannot identify where corrosion initiates [45].
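As a concrete illustration of this equivalent-circuit reasoning, the sketch below computes the low-frequency impedance magnitude of the Figure 1 model (a solution resistance in series with two parallel R-C elements, for the coating and the oxide layer) and compares it with the 10^7 Ω·cm² criterion from [44]. All component values are hypothetical and chosen only for demonstration:

```python
import numpy as np

def parallel_rc(r, c, omega):
    """Impedance of a parallel R-C element at angular frequency omega."""
    return r / (1.0 + 1j * omega * r * c)

def total_impedance(omega, r_coat, c_coat, r_oxide, c_oxide, r_sol=100.0):
    """Figure 1 model: R_sol in series with (R_coat || C_coat) and (R_oxide || C_oxide)."""
    return (r_sol
            + parallel_rc(r_coat, c_coat, omega)
            + parallel_rc(r_oxide, c_oxide, omega))

# Hypothetical parameters per cm² for an intact barrier coating
f_low = 0.01                      # low-frequency measurement point, Hz
omega = 2.0 * np.pi * f_low
z = total_impedance(omega, r_coat=1e9, c_coat=1e-9, r_oxide=1e7, c_oxide=1e-6)
print(f"|Z| at {f_low} Hz ≈ {abs(z):.3g} Ω·cm²")
# A result below ~1e7 Ω·cm² would indicate a poor protective barrier [44].
```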
Figure 1. An equivalent circuit model for epoxy-coated aluminum. R_oxide is the resistance and C_oxide the capacitance of the aluminum oxide layer; R_coat is the resistance and C_coat the capacitance of the coating. (Reproduced from [41] with permission; Copyright 2004 Elsevier.)
Electrochemical Noise Method (ENM)
Electrochemical noise can be described as naturally-occurring fluctuations in potential and current around a mean value in the electrochemical cell [46]. The parameters voltage noise (σ_v) and current noise (σ_i) can be derived from the fluctuations. The noise resistance R_n, obtained by applying Ohm's law, is used to evaluate the corrosion resistance (see Equation (1)):

R_n = σ_v / σ_i    (1)

The higher the value of R_n, the better the corrosion resistance of the coating [22]. When the noise resistance R_n is less than 10^6 Ω·cm², the coating shows poor corrosion resistance. When R_n is more than 10^8 Ω·cm², the coating exhibits good corrosion protection. A value between 10^6 and 10^8 Ω·cm² indicates an intermediate level of corrosion resistance [47].
ENM is an electrically non-intrusive and sensitive method which requires only a few minutes per measurement [48]. It has recently become a principal method for determining metal corrosion rates and for studying localized corrosion processes, and is widely used in industry for corrosion detection.
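The noise-resistance calculation of Equation (1) and the classification thresholds from [47] can be expressed compactly as follows; the time series here are synthetic and purely illustrative:

```python
import numpy as np

def noise_resistance(potential, current):
    """Noise resistance R_n = sigma_v / sigma_i (Equation (1))."""
    return np.std(potential) / np.std(current)

def classify(rn):
    """Classify corrosion resistance using the thresholds from [47]."""
    if rn < 1e6:
        return "poor"
    if rn > 1e8:
        return "good"
    return "intermediate"

# Synthetic fluctuations around mean values (hypothetical magnitudes)
rng = np.random.default_rng(seed=1)
v = -0.65 + 1e-4 * rng.standard_normal(600)   # potential noise, V
i = 1e-9 + 1e-12 * rng.standard_normal(600)   # current noise, A/cm²
rn = noise_resistance(v, i)
print(f"R_n ≈ {rn:.3g} Ω·cm² -> {classify(rn)} corrosion resistance")
```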
Potentiodynamic Polarization Measurements
Potentiodynamic anodic polarization characterizes the corrosion behavior of a metal through its current-potential relationship. It can be used to determine the function and type of an inhibitor. The sample potential is scanned slowly in the positive direction, so that an oxide film forms during the test. Passivation tendencies and the influence of inhibitors can easily be studied by this method. Furthermore, the corrosion behavior of different coatings or metals can be compared on a rational basis, and the technique can be used as a pretest to establish a suitable range of corrosive conditions for further long-term measurements. The general corrosion behavior of coated substrates can be evaluated from the corrosion potential E_corr: the more positive E_corr and the lower the current values, the better the corrosion resistance [22].
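A minimal sketch of how E_corr can be read off a polarization curve: the corrosion potential corresponds to the point where the net current changes sign (|i| is minimal). The scan below is synthetic, generated from a simple Butler-Volmer-like expression with hypothetical parameters:

```python
import numpy as np

def estimate_ecorr(potential, current):
    """Take E_corr as the potential at which |i| is minimal (net current ≈ 0)."""
    return potential[np.argmin(np.abs(current))]

# Synthetic potentiodynamic scan around a true E_corr of -0.72 V
e = np.linspace(-1.0, -0.4, 601)                 # scanned potential, V
e_corr_true, i0, beta = -0.72, 1e-7, 0.06        # V, A/cm², ~60 mV/decade Tafel slope
i = i0 * (10 ** ((e - e_corr_true) / beta)
          - 10 ** (-(e - e_corr_true) / beta))   # anodic minus cathodic branch
print(f"estimated E_corr ≈ {estimate_ecorr(e, i):.3f} V")
```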
Localized Electrochemical Techniques
For many corrosion phenomena, high-resolution rather than averaged data about the behavior of an electrochemically-active surface is required (for instance, to differentiate small local anodes and cathodes on a metal surface) [42]. A scanning Kelvin probe (SKP), as a non-contact and non-destructive method, allows measuring and mapping local potential and current differences on the microscale. The scanning vibrating electrode technique (SVET) enables spatial characterization of corrosion activity by measuring the potential gradients in the electrolyte due to the presence of anodic and cathodic areas on the metal surface. These localized techniques find application in various corrosion investigations, such as the effect of microstructure and surface finish on corrosion initiation, localized corrosion phenomena, and the detection of electrochemically-active pin-hole defects in coatings.
Conventional Measurements
Conventional corrosion measurement methods are inexpensive and easy to perform. Different national and international standards have been developed, which include immersion, salt spray, damp heat, gas corrosion, and filiform corrosion tests. They are commonly used in industry. However, they are not suitable for studying corrosion kinetics or mechanisms, as they provide only qualitative results and usually involve long test cycles and poor repeatability.
Immersion Test
The coated substrates are directly immersed in the corrosive medium. After a certain exposure time, the damage or corrosion of the coating is examined to evaluate the corrosion resistance. Immersion tests are widely used for different coatings in different corrosion media; common variants are the water resistance test, salt water resistance test, acid resistance test, and various organic solvent resistance tests. Numerous country-specific national immersion test standards also exist. Some international ASTM standards for immersion tests are B895-05, G44, and G110.
Salt Spray Test
The salt spray test is the most classic and widely-used method to evaluate the corrosion resistance of coatings. The first international salt spray standard, ASTM B117, was recognized in 1939. The coated substrates are scratched first, and the test is then carried out in a closed test chamber where salt water (5% NaCl solution, pH 6.5-7.2) is sprayed using pressurized air at 35 ± 2 °C. Later standards include ISO 9227, JIS Z 2371, and ASTM G85. To modify the corrosion conditions, the acetic acid-salt spray (ASS), copper-accelerated acetic acid-salt spray (CASS), and prohesion cycle tests, as well as various modified versions, were established.
Spectral Analysis
The decomposition of a polymer in the coating is one of the main causes of coating failure and metal corrosion, which can lead to further deposition. The chemical changes of the polymer and the inorganic corrosion products can be quantitatively detected by, for example, FTIR, infrared microscopy, or laser Raman spectroscopy (LRS). These techniques can be used to study the corrosion performance and corrosion mechanism because of their highly-precise positioning and quantification [45]. Details about spectral analysis have been given in Sections 2.2 and 2.3.
Surface Analysis
The various properties of the coating are highly related to its microstructure, chemical composition, and bonding condition at the coating/metal interface. The surface changes during the corrosion process are characterized using the same analytical methods as already described in Section 2.1.
Aluminum Surface Treatments with Phosphonic Acids
The following sections describe the various types of phosphonic acids, and some phosphates, which have been used to modify the surface of aluminum with the main objective of improving its corrosion protection and/or its adhesion to organic layers. A list of phosphonic acids commonly used to improve the corrosion protection of aluminum is shown in Table 5. Some of these phosphonic acids have simple structures (PPA or MSAP); others are more functional phosphonic acids with terminal groups like vinyl (VPA), amino (APP), and pyrrole (Cn-Ph-P).
Table 5. Chemical name, structure, and abbreviation of organophosphonic acids.
Phenylphosphonic Acid (PPA)
In an earlier work, it was found that a hydrophobic sol-gel film can provide good corrosion resistance to aluminum [50]. Subsequently, the same researchers incorporated organic phenylphosphonic acid as an anion in a hydrophobic sol-gel film to further enhance the protection of aluminum against pitting corrosion, the expectation being that such combinations of phosphonic acid and sol-gel coatings would be beneficial for corrosion protection. Various sol-gel coatings with two different anions were applied to the aluminum substrate; the formulations used in this research are shown in Table 6. Aluminum rods (99.999%) embedded in a Teflon sheath and aluminum plates (5050-H24) were used as substrates and coated with the various sol-gel formulations (Table 6) via dip-coating. The substrates were then submerged in 100 ppm NaCl solution at 25 °C for 20 min and subjected to ENM to study the initiation and propagation of corrosion at the interface between the aluminum plates and the coating. As discussed above, the higher the value of R_n, the better the corrosion resistance of the coating. The results showed that PTMOS + PPA sol-gel coatings exhibited the best protection (R_n = 6.0 kΩ) compared to uncoated aluminum (R_n = 1.5 kΩ) and PTMOS coatings (R_n = 4.0 kΩ).
Table 6. Various formulations of sol-gel coatings.
Sol-gel chemistry: PTMOS; PTMOS; PTMOS + TMA; PTMOS; TEOS; TEOS; TEOS + TMA.
In the potentiodynamic polarization measurements, aluminum rods were used. These rods were exposed to 100 ppm NaCl solution for 30 min before the measurements. As mentioned before, the more positive E_corr and the lower the current values, the better the corrosion resistance. PTMOS + PPA showed the highest E_corr value and a low current. In contrast, the TEOS-based film showed a negative E_corr value, indicating poor corrosion resistance. Even the addition of TMA to the TEOS film accelerated the corrosion process. Potentiodynamic polarization measurements of samples from the electrodeposition process showed a slight enhancement of corrosion inhibition. Elemental analysis was used to detect phosphorus (due to PPA) in the films. For the PTMOS-based film, phosphorus was detected, and it was hypothesized that the anion of PPA and the phenyl group of PTMOS tend to form stable π-interactions in the sol-gel system [22].
Vinylphosphonic Acid (VPA)
Cooling of aluminum alloy plate heat exchangers by seawater requires superior corrosion resistance. Trifluoroethylene polymers were preferred to form a fluorocarbon resin coating on the aluminum alloy because of their high adhesion to the organic phosphonic acid primer coating and high corrosion resistance. Vinylphosphonic acid (VPA) was used as the organic phosphonic acid primer because of its easy handling and its superior adhesion effect. The aluminum alloy (3003) was first anodically oxidized and then immersed in aqueous VPA solution (10 g/L) at 65 °C for 10 s or 120 s, followed by fluorocarbon resin coating and drying at 50 °C for 24 h. For the corrosion test, one hundred 1 mm² cross-cuts were formed in the coating, and a tape was attached to it and then peeled by the method specified in JIS K 5600-5-6. For samples coated without an organic phosphonic acid primer, all hundred cross-cuts of the coating were peeled off (0/100) upon tape peeling. However, for the coating with VPA (10 s or 120 s), all 100 cross-cuts remained unpeeled (100/100), which indicated excellent anti-corrosion properties [49].
1,12-Dodecyldiphosphonic Acid (DDP)
Self-assembled monolayers (SAMs) consisting of zirconium phosphate and its derivatives were used as a substitute for chromating on aluminum for ecological reasons [8]. Aluminum alloy (1100) was pretreated with 0.4 vol % 3-aminopropyltriethoxysilane (APTES) in toluene by chemical vapor deposition. APTES is believed to prevent the formation of defects and to provide protection against the harsh subsequent phosphonation treatment (pH ≤ 2). Subsequently, the substrate was immersed in an acetonitrile solution of 0.2 mol POCl3 and γ-collidine. After ultrasonic rinsing with acetonitrile, the substrate was dipped in a 5 mmol/dm³ ethanol-water solution of zirconyl chloride (ZrOCl2·8H2O) to form a zirconium phosphate (ZrP) layer. Finally, the substrate was suspended in a 2 mmol/dm³ acetone-water solution of DDP to form the three-layered Zr-DDP coating, as shown in Scheme 1. The Zr-DDP multilayer film was tested for corrosion resistance by exposure to 5 wt % NaCl solution at 308 K for 48 h in the salt-spray test. The corrosion rate of the multilayer Zr-DDP coating was 0% (indicating no corrosion at all), showing superior anti-corrosion properties compared to the APTES coating (100%), the POCl3 coating (80%), and the conventional ZrP coating (100%). After a 72 h salt spray test, Zr-DDP films were corroded with a corrosion rate of 60%, whereas the other treatments showed a 100% corrosion rate. The authors concluded that Zr-DDP films constructed from SAMs are promising surface-finishing treatments to replace the conventional phosphating or chromating treatments [8].
Amino Trimethylene Phosphonic Acid (ATMP)
ATMP is known for its anti-scale performance due to its excellent chelating ability, low threshold inhibition, and lattice distortion process. ATMP was used as an inhibitor to improve the anti-corrosion properties of an aluminum alloy (AA2024-T3) [15]. The deposition bath was made by mixing ATMP in distilled water, tetraethylorthosilicate (TEOS), and ethanol in a 6:4:90 (v/v/v) ratio. In this work, the effects of three different pretreatments (acetic acid, acetic acid-NaOH, NaOH) and of the ATMP concentration on corrosion protection were studied. SEM and energy-dispersive X-ray spectroscopy (EDS) analysis of the treated aluminum surface confirmed that acetic acid pretreatment of the aluminum surface was the most efficient. The authors assumed that the acetic acid pretreatment increased the surface area of the alloy matrix, which led to the formation of more favorable metallosiloxane and metal-phosphonate bonds to the surface.
Three different ATMP concentrations of the deposition bath were investigated. The corrosion behavior of the treated aluminum was tested against 0.05 M NaCl solution for 48 h. EIS was used to assess the corrosion resistance, as mentioned in Section 3.1.1: a high R_HF (the resistance at high frequency, related to the coating layer) and a low CPE_HF (the capacitance at high frequency, related to the coating layer) indicate higher corrosion resistance. The TEOS coating containing ATMP (5.00 × 10^−4 M) showed R_HF = 81.4 kΩ·cm² and CPE_HF = 16.2 µF·cm⁻²·s^(n−1), whereas increasing or decreasing the ATMP concentration had an adverse effect on corrosion protection. In contrast, the TEOS coating without ATMP gave R_HF = 5.06 kΩ·cm² and CPE_HF = 160.5 µF·cm⁻²·s^(n−1), indicating almost no corrosion resistance. The authors attribute the corrosion protection to the strong chemical bonding of the phosphonic groups to the aluminum substrate [15].
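For reference, the CPE_HF values quoted here are parameters of a constant phase element; its impedance follows the standard definition (not specific to [15]):

```latex
\[
Z_{\mathrm{CPE}}(\omega) = \frac{1}{Q\,(j\omega)^{n}}, \qquad 0 < n \le 1,
\]
```

where Q carries units of F·cm⁻²·s^(n−1) (hence the µF·cm⁻²·s^(n−1) quoted above); n = 1 recovers an ideal capacitor, while n < 1 reflects surface inhomogeneity.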
Ethylenediamine Tetra Methylene Phosphonic Acid (EDTPO)
Modification of aluminum alloy (AA2024) with a well-known anticorrosive epoxy-polyamide paint, in combination with a vinyltrimethoxysilane (VTMS)/tetraethylorthosilicate (TEOS) nanocoating, was used to improve its corrosion resistance and adhesion to organic layers. EDTPO, as a catalyst, was incorporated into VTMS/TEOS to form a sol-gel coating on the substrate surface and is believed to promote the formation of Al-O-P bonds, which is beneficial for the anticorrosive properties [17]. The concentration of EDTPO in the sol-gel deposition solution was 3.75 × 10^−5 mol·L⁻¹. EIS measurements were carried out in 3.5% NaCl aqueous solution. Substrates coated with epoxy and the sol-gel film, with or without EDTPO, showed good corrosion resistance: the EIS results remained constant at 10^9.5 Ω·cm² for up to 300 days, indicating excellent corrosion resistance, whereas the values for the epoxy coating without the sol-gel film started to decrease after 160 days, and after 300 days the coating resistance was only 10^7.9 Ω·cm². A cyclic accelerated corrosion test in 3.5% NaCl solution for 60 days was evaluated according to ASTM D-1654. The results showed that the failure area of the coating with EDTPO was 0.8%, indicating excellent corrosion resistance compared to the coating without EDTPO (1.3%) and the coating without any sol-gel primer (32%). After accelerated corrosion, the pull-off test (UNE-EN-ISO 4624) showed the adhesion reduction for the coating with EDTPO to be 0%, whereas the adhesion of the coating without EDTPO was reduced by 15%, and that of the coating without any sol-gel primer by 40%. It was concluded that the EDTPO-modified silane coating (VTMS/TEOS) has excellent adhesion and anti-corrosion properties because of its barrier properties [17].
In a similar work, two different concentrations of EDTPO (3.75 × 10^−5 and 3.75 × 10^−4 mol·L⁻¹) were incorporated in TEOS to form sol-gel deposition solutions, named TEOS/EDTPO 10^−5 and TEOS/EDTPO 10^−4, respectively. Aluminum alloy (AA2024-T3) panels were immersed in these formulations to form an anti-corrosion film. As mentioned in Section 3.1.1, a high R_HF value and a low CPE_HF value indicate corrosion resistance. After seven days of soaking in 0.05 mol·L⁻¹ NaCl solution at 25 °C, the EIS results for TEOS/EDTPO 10^−4 were R_HF = 248.3 Ω·cm² and CPE_HF = 1.74 µF·cm⁻²·s^(n−1), and for TEOS/EDTPO 10^−5 they were R_HF = 177.4 Ω·cm² and CPE_HF = 15.8 µF·cm⁻²·s^(n−1). Thus, TEOS/EDTPO 10^−4 clearly provided the better corrosion resistance. The substrate coated with TEOS/EDTPO 10^−4 was further investigated by soaking in 0.05 mol·L⁻¹ NaCl solution at 70 °C for various immersion times. After a 24 h exposure, the impedance modulus was lower than that measured at 25 °C. Nevertheless, the corrosion resistance of the EDTPO-containing coatings was still approximately six times higher than that of TEOS-only coatings. This work also confirmed that EDTPO enhances the corrosion resistance of the sol-gel coating even at a higher temperature [30].
In separate work by the same authors, the corrosion behavior of EDTPO and ATMP in TEOS sol-gel coatings on the aluminum alloy (AA1100) surface was compared. Substrates were dip-coated in two different solutions: 3.75 × 10^−4 mol·L⁻¹ EDTPO in TEOS-ethanol solution and 5.00 × 10^−4 mol·L⁻¹ ATMP in TEOS-ethanol solution. EIS was used to evaluate the corrosion of the coated substrates in 0.05 mol·L⁻¹ NaCl solution for seven days. The corrosion resistance of TEOS/EDTPO was R_HF = 141.0 Ω·cm² with CPE_HF = 2.45 µF·cm⁻²·s^(n−1); for TEOS/ATMP it was R_HF = 113.7 Ω·cm² with CPE_HF = 1.65 µF·cm⁻²·s^(n−1). It was concluded that both EDTPO and ATMP provide considerable corrosion protection for AA1100, with the EDTPO-containing coatings performing somewhat better than the ATMP-containing ones [40].
1,2-Diaminoethanetetrakis-Methylenephosphonic Acid (DETAPO)
The adhesion and corrosion performance of silane sol-gel film coatings on aluminum has been reported; in particular, ATMP and EDTPO, as phosphonic catalysts in sol-gel systems, have been well studied for protection against corrosion [15,17,30,40]. The phosphonic derivative DETAPO, which has a similar structure, has also been tested because of the higher concentration of O=P-OH groups in its structure compared to ATMP and EDTPO [23]. In this work, aluminum alloy (AA2024-T3) was coated with a VTMS/TEOS sol-gel film containing DETAPO and an epoxy-polyamide resin. A cyclic accelerated corrosion test in 3.5% NaCl for 45 days was performed according to the ASTM D1654 method. The results showed that the failure area of the DETAPO-modified coating was around 10-11%, compared to a 35% failure area for the epoxy coating without any silane film. Pull-off tests after accelerated corrosion showed that the adhesion reduction of the DETAPO-modified silane coating was around 52.9-62.5%; the coating without any silane film, on the other hand, was reduced to 34.7% adhesion. It was concluded that the DETAPO-modified coating can provide outstanding corrosion resistance for the aluminum surface [23].
(12-Ethylamino-Dodecyl)-Phosphonic Acid
Self-assembled monolayers (SAMs) were used to form thin organic adhesion-promoter layers to replace the chromating process on aluminum. (12-Ethylamino-dodecyl)-phosphonic acid was chosen as a model functional organic phosphonic acid in this work because its phosphonic acid anchor group acts as a reactive group towards the aluminum surface, its aliphatic part acts as a hydrophobic spacer, and its amino head group serves as a reactive group towards organic material [27]. The aluminum alloy (AlMg) was dip-coated in a 10^−3 M solution of (12-ethylamino-dodecyl)-phosphonic acid in ethanol/water (3:1 vol %). Surface analysis by FTIR and surface plasmon resonance spectroscopy (SPR) confirmed that the organophosphonic acid spontaneously adsorbs on the aluminum surface and subsequently forms oriented layers. The adhesion promotion and corrosion resistance of the aluminum surface were confirmed by the acetic acid salt spray test (ASS test, DIN 50021) and the filiform test (DIN 50024). Coated panels were scratched and exposed in a climate chamber to NaCl solution and acetic acid. After 1200 h of exposure, the infiltration for the amino phosphonic acid coating was less than 1 mm, indicating excellent corrosion resistance compared to the 8 mm infiltration for the uncoated substrate [27].
Aminopropyl Phosphonate (APP)
To replace harmful chromate conversion layers, environmentally-friendly aminopropyl phosphonate (APP), aminopropyl silane (APS), and hexamethyldisiloxane (HMDSO) were used to coat aluminum alloy (6016) [9]. As is known, filiform corrosion (FFC) usually occurs at the interface between a polymer and an aluminum alloy. Hence, the corrosion resistance and adhesion promotion of these coatings were studied to find a suitable pretreatment for automotive applications. The influence of the rolling direction on filiform corrosion was also investigated. Substrates were dipped in 1 mmol APP solution at pH 7 for 1 h, followed by an epoxy coating. APS was coated on substrates with the same dip-coating method, and for HMDSO a plasma coating technique was used. Before the corrosion test, substrates were scribed perpendicular or parallel to the rolling direction. The corrosion test was carried out by exposing the scratched substrates in a vessel to concentrated HCl vapor for 20 min. FFC was evaluated based on the filament initiation time and the number of filaments observed in the digital microscope. Visual observation confirmed that filaments grew predominantly along the rolling direction, indicating that the rolling direction influences the interfacial bonding. The FFC severities were as follows: APS > etching > APP > HMDSO plasma. The adhesion of the epoxy adhesive to the substrate was analyzed by a peeling test, and the results followed the trend: HMDSO plasma > APS > APP > etching. It was concluded that the adhesion of the coating has a great influence on filament morphology and, further, on filament propagation, but that an increase in coating adhesion is no guarantee of resistance to FFC [9].
The corrosion performance of aluminum powder was enhanced by a novel functionalization with graphene oxide (GO) using phosphonic acid as a linker [11]. APP was used as the "link" agent to connect graphene oxide (GO) with aluminum powder. APP was added to a suspension of GO and deionized water. After stirring, refluxing, dialyzing, and drying, GO-APP was added to the aluminum powder suspension to form the graphene oxide-modified aluminum powder (GO-Al). Corrosion performance was tested in dilute hydrochloric acid. Since corrosion of aluminum generates hydrogen, the corrosion behavior could be evaluated by detecting the amount and the onset time of hydrogen gas. The results showed that, for the Al/HCl system, hydrogen was detected after 100 min. However, in the GO-Al/HCl system, hydrogen was detected only after 140 min, which confirms the enhanced anti-corrosive performance of GO-Al. Through XPS, FTIR, field emission scanning electron microscopy (FE-SEM), and EDS, it could be confirmed that the flaky aluminum particles were successfully covalently bonded to and covered by GO. In conclusion, GO as a barrier coating was well connected with epoxy through APP [11].
ω-(3-Phenylpyrrol-1-ylalkyl) Phosphonic Acid (Cn-Ph-P)
Cn-Ph-P was used to build SAMs on the aluminum surface (Al/Al₂O₃ chemical vapor deposition on polished, p-doped Si wafers) because its phosphonic acid group could anchor to the aluminum surface and its pyrrole group could be used to achieve an in situ surface polymerization with further monomers [10]. The substrates were immersed in a Cn-Ph-P solution for different time intervals (5 min, 1 h, and 24 h) to form SAMs on the surface. After that, the pyrrole monomer was polymerized with the free terminal pyrrole group of the SAM by oxidants, such as sodium or ammonium peroxydisulphate, followed by the growth of polypyrrole (PPY) on the surface. The contact angle was used to evaluate the adsorption behavior. It was inferred that a more hydrophobic surface (higher contact angle) was caused by the phosphonic acid group reacting with the surface and the terminal polymerizable group being present on top of it. The surface contact angle rose after adsorption of Cn-Ph-P, which indicated that Cn-Ph-P was oriented on the surface. The contact angles of C12PhP were slightly higher than those of C10PhP because of the stronger van der Waals interaction between the alkyl chains. C12PhP showed excellent stability after eight repeated cycles using the Wilhelmy method [10].
ω-(2,5-Dithienylpyrrol-1-yl-alkyl) Phosphonic Acid (SNS-n-P)
SNS-n-P was used as a model functional phosphonic acid to study the influence of varying alkyl chain length (SNS10P, SNS4P) on the adsorption on the aluminum surface (Al/Al₂O₃ chemical vapor deposition on polished, p-doped Si wafers). The contact angle of the SNS10P-coated surface did not change after ten repeated cycles of the Wilhelmy method, which indicated that SNS10P formed a very stable bond to the surface. However, the contact angle of the SNS4P coating decreased significantly with the increasing number of repeated cycles. Thus, this comparison of the analogous structures SNS10P and SNS4P showed that the alkyl chain length of the structures influences the bonding at the surface [10].
(Heptadecafluoro-1,1,2,2-Tetrahydrodecyl) Phosphonic Acid (HDF-PA)
Trichlorosilanes or metal-ligand coordination have mostly been used to engineer surface wetting properties on metals. However, they are highly moisture sensitive due to hydrolysis and self-condensation of silanes [51-53]. Therefore, HDF-PA was employed to coat different self-assembled monolayers (SAMs) on metal oxide surfaces, because phosphonic acid-functionalized molecules are stable in water and build a stable metal-ligand coordination and hetero-condensation between the phosphonate and the substrate surface. Substrates with a 30 nm thick Al₂O₃ film were dip-coated in 2 mM HDF-PA in isopropyl alcohol for 1 h. After a water flow test, changes in the surface contact angle were analyzed to evaluate the reliability of the chemical bond between the SAMs and the substrate surface. (Heptadecafluoro-1,1,2,2-tetrahydrodecyl) trichlorosilane (HDF-S) was also studied as a comparison. The bare Al₂O₃ films showed contact angle values of 70.0° ± 1.5°, while the contact angles of the HDF-PA- and HDF-S-coated Al₂O₃ films were enhanced to 99.9° ± 1.0° and 102.7° ± 2.4°, respectively. After exposing the surface to 5 L of water droplets, the contact angle of the HDF-S coating remained around 102.7°, whereas the contact angle of the HDF-PA coating decreased from 99.9° to 69.3°. The results showed that HDF-S formed a more stable bond than HDF-PA, which may be due to the cross-linked siloxane network on the surface. A low-temperature (<150 °C) thermal annealing process was used to improve the HDF-PA bond formation. The results showed that, after the water flow test, the contact angles of substrates annealed at 100 °C and 150 °C were stable at around 101°. Therefore, the durability of HDF-PA coatings could be improved by an additional thermal annealing process at 100-150 °C [19].
Poly(vinylphosphonic Acid) (PvPA)
PvPA was coated as a thin polymeric interfacial layer on an aluminum alloy (AA 1050) surface by dip-coating and compared to poly(acrylic acid) (PAA)- and poly(ethylene-alt-maleic anhydride) (PEMah)-coated aluminum surfaces. After that, an epoxy (Resolution Epikote 1001) adhesive was coated on the modified substrate. The PvPA-based system showed poor adhesion performance in the pull-off test compared to PEMah, because PvPA forms a weakly-cured epoxy/polymer interphase [24].
Monostearyl Acid Phosphate (MSAP)
A novel ultrasonic-assisted deposition (USAD) method was used to coat phosphate films on aluminum and compared with films formed by mild mechanical stirring [12]. MSAP was used in this study, and the impact of the deposition method on the adsorption behavior of these films was evaluated. Simultaneously, the corrosion resistance and adhesion properties of the coatings were studied as well. Infrared spectroscopy (IR) and NMR were used to analyze the stability of the MSAP film after soaking the substrates for 5 h at 100 °C in a reflux distillation column containing water and steam. It was observed in the NMR spectra that the USAD substrates had resolved peaks at −7 and −14 ppm, whereas the chemical shift of the stirring-coated substrates occurred at 0 ppm. Based on previous NMR studies of alkylphosphate films on metal oxide surfaces [54-58], it was confirmed that the peaks at −7 and −14 ppm correspond to bidentate and tridentate interactions between the aluminum oxide surface and the phosphate. It was concluded that the USAD treatment favored tridentate interaction between MSAP and aluminum oxide [12].
Phosphoric Acid Mono-(12-Hydroxy-Dodecyl) Ester
From earlier studies, it was clear that combined pretreatments of phosphoric acid anodic oxidation (PAA) and an adhesion promoter (AP) (phosphoric acid mono alkyl ester) could enhance the adhesion and durability of the aluminum/epoxy bond and thus could be considered an alternative to chromium-based pretreatments for aluminum [59]. In a more recent work, aluminum substrates were first anodized in a phosphoric acid solution to create PAA. The PAA substrates were then dip-coated in a 10⁻³ M aqueous solution of phosphoric acid mono-(12-hydroxy-dodecyl) ester for 5 min at room temperature [41]. EIS and the floating roller peel test were used to study the corrosion performance of the PAA and PAA/AP systems. After 48 h of soaking in 3% NaCl solution at 80 °C, the EIS results showed that the PAA and PAA/AP pretreated surfaces had a strong influence on the crosslinking of the epoxy at the interphase. For the floating roller peel test, one rigid and one flexible coated substrate were first bonded with Delo-Duopox 1891 adhesive. The joints were then corroded according to the accelerated method VDA 621-415. The peel test was performed based on DIN EN 1464. The results indicated that, after aging, PAA/AP treated substrates showed higher peel strength (around 3.3 N/mm) than PAA treated substrates (around 2.8 N/mm). It was confirmed that the AP could enhance the cross-linking of the epoxy coat. The authors suggested that the AP layer was well-ordered on the surface and that the strong bonding of the phosphoric acid anchor group to the PAA led to the good adhesion and corrosion resistance [41].
Fluoro Alkyl Phosphates (Zonyl UR)
Zonyl UR was deposited on aluminum plate and aluminum oxide powder as a corrosion inhibitor and adhesion promoter. Two different deposition methods were compared: ultrasonic-assisted deposition (USAD) and magnetic stirring. The environmental stability of the coating was observed in a reflux distillation column (100 °C). For the aluminum plate, the change in the IR spectra was used to evaluate the stability of the coatings, which showed that, after 5 h of refluxing, barely any changes occurred to the USAD-coated surface. In contrast, clear evidence of corrosion was observed for the stirring deposition. For the aluminum powder, IR spectra of the stirring deposition exhibited broad features, which did not exist for USAD. Solid state nuclear magnetic resonance (NMR) was used to study the interactions between the phosphate group and the aluminum oxide powder. The results confirmed that USAD could provide a more homogeneous film with mostly tridentate interaction and low coverage of mono- and bidentate interactions [12].
Phenylphosphonic Acid (PPA)
PPA has been used as a self-phosphating agent and an acid catalyst in the formulation of a polyester-melamine coating for aluminum surfaces [14]. In this work, a chrome-free single-step in situ phosphatizing coating (ISPC) was used on aluminum alloy (3003 or 3105). This work is based on a patent published earlier, where, for an ISPC, an optimum amount of an in situ phosphatizing reagent (ISPR) or a mixture of ISPRs is predisposed in the paint system to form a stable and compatible coating formulation [60]. It is believed that when a chrome-free single-step coating of the in situ self-phosphating paint is applied to a bare metal substrate, the phosphatizing reagent chemically and/or physically reacts in situ with the metal surface to produce a metal phosphate layer and simultaneously forms covalent P-O-C (phosphorus-oxygen-carbon) linkages with the polymer resin. Such linkages enhance the adhesion of the coating and suppress substrate corrosion. It was assumed that PPA could also provide phosphate formation on the metal surface in the in situ phosphatizing treatment [14]. The aluminum panels were spray-coated with polyester-melamine paint containing 1 wt % PPA and with other formulations containing commercially available acid sources (phosphonosuccinic acid, para-toluene sulphonic acid). Coated substrates were analyzed for corrosion behavior using EIS measurements, saltwater immersion, and pencil hardness tests. The paint on cured panels was removed with a razor and analyzed for the glass transition temperature (Tg) using differential scanning calorimetry (DSC). A higher Tg correlates with a higher cross-linking density in the polymer. The Tg of the 1 wt % PPA paint formulation was 22 °C. After the paint was exposed to 300 °C, the Tg increased to 65 °C, indicating an increase in the crosslinking density of the polymer. For formulations containing para-toluene sulphonic acid (commercial solution), the crosslinking density dropped, as the Tg dropped after severe heating. This resulted from the possible cleavage of polyester-melamine cross-links in favor of melamine self-condensation in the paint films [61].
After immersion of the coated substrate in 3% NaCl for three days, EIS was used to study the corrosion behavior. It is known from previous work that a surface with a low-frequency impedance below 10⁷ Ω·cm² is considered a poor protective barrier [44]. The paint with 1 wt % PPA resulted in an impedance value of 10¹⁰ Ω·cm² at low frequency. This result indicates the superior corrosion protection of the PPA-containing coating. For the saltwater immersion test, test panels were scribed with an "X" and then soaked in a 3% NaCl solution for 66 days. After that, tape was firmly pressed against the scribed area and pulled off. The saltwater corrosion test was evaluated using the ASTM D3359A method, which showed that there was no discoloration of the paint and no paint was removed with the tape. Only very few tiny blisters (Ø < 1 mm) around the scribe were observed, which may result from paint defects. The pencil hardness of the paint (ASTM D3363) was F. The paint showed no discoloration. The authors concluded that the paint with 1 wt % PPA provided excellent corrosion resistance [14].
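As a hedged illustration of how this low-frequency impedance criterion can be applied, the following Python sketch encodes the 10⁷ Ω·cm² threshold from [44]; the function name and the classification wording are our own illustrative assumptions, while the two example values come from the PPA and PPSA formulations described in this review.

```python
# Minimal sketch of the low-frequency EIS criterion used in this review:
# |Z| at low frequency >= 1e7 ohm*cm^2 indicates a protective barrier [44].
# Function name and labels are illustrative assumptions, not from the paper.

PROTECTIVE_THRESHOLD_OHM_CM2 = 1e7  # below this, a poor protective barrier

def classify_coating(impedance_low_freq_ohm_cm2: float) -> str:
    """Classify a coating from its low-frequency EIS impedance modulus."""
    if impedance_low_freq_ohm_cm2 >= PROTECTIVE_THRESHOLD_OHM_CM2:
        return "protective barrier"
    return "poor protective barrier"

# Example: 1 wt % PPA paint (~1e10 ohm*cm^2) vs. PPSA paint (~1e6 ohm*cm^2)
print(classify_coating(1e10))  # protective barrier
print(classify_coating(1e6))   # poor protective barrier
```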
2-(Phosphonooxy) Benzoic Acid (Fosfosal)
Fosfosal was used as an acid additive in the polyester-melamine paint and in the in situ phosphatizing treatment on aluminum alloy (3003) [14]. Substrates were spray-coated with polyester-melamine paint containing different concentrations (0.5%, 0.75%, and 1%) of Fosfosal. DSC was used to evaluate the cross-linking density of the polymer by analyzing the glass transition temperature (Tg). The DSC program involved annealing at 80 °C, then scanning from −50 °C to 300 °C at a heating rate of 10 °C/min. A higher Tg indicates a higher cross-linking density. EIS was used to study the corrosion performance of the coated substrates after three days of immersion in 3% NaCl. As mentioned earlier, an EIS impedance lower than 10⁷ Ω·cm² at low frequency indicates poor anti-corrosion properties. Corrosion resistance was further studied after saltwater immersion. Coated substrates were scribed with an "X" and immersed in 3% NaCl solution for 66 days. After drying, tape was firmly pressed against the "X" area and pulled off. The corrosion resistance was evaluated by ASTM method D3359A. The pencil hardness of the painted Al panels was measured using ASTM method D3363. The results of the corrosion tests are summarized in Table 7. The corrosion resistance (saltwater immersion) of the 1% Fosfosal coatings could not be estimated with the ASTM method. For 0.5% Fosfosal, the saltwater immersion results were not reproducible: one panel had no paint removal and was rated 5A in the ASTM D3359A test, one panel had a clump of blisters of size ~Ø 2 mm, and the rest had no blistering. DSC results for coatings with 0.5% Fosfosal showed that the cross-linking density increased, and EIS results showed that 0.5% Fosfosal provided sufficient corrosion protection. In the pencil hardness test, coatings containing 1% and 0.75% Fosfosal (4H) were harder than those with 0.5% Fosfosal (HB). However, the EIS results for coatings containing 1% Fosfosal were not reproducible, and the saltwater immersion results for 0.75% Fosfosal were worse than for 0.5% Fosfosal. Therefore, the authors suggested that 0.5% Fosfosal should be used in the in situ phosphatizing treatment [14].
Phosphonosuccinic Acid (PPSA)
PPSA, as an acid additive, was added to the polyester-melamine paint coated on the aluminum alloy (3003) surface. The EIS accelerated corrosion test was used to study the corrosion performance of these coatings. Polyester-melamine paint with 5% PPSA could not be fully cured. Thus, paint with 1 wt % PPSA was coated on the substrate and cured, but these coatings exhibited poor coating properties. EIS results showed that the impedance at low frequency was on the order of 10⁶ Ω·cm². After one week of immersion in 3% salt solution, many blisters were observed on the paint surface. PPSA performed badly as a catalyst in the polyester-melamine paint system [14].
Conclusions
Environmental issues with existing pretreatments for aluminum, together with the need to develop new materials with higher mechanical and chemical performance, are driving the research and industrial communities to develop new pretreatments based on phosphonic acids. The new treatments have been shown to offer higher mechanical performance and corrosion protection; however, much work on adaptation to industrial processes remains, and their real environmental impact needs to be evaluated to demonstrate the commercial feasibility of these treatments. In many cases, the phosphonic acids are incorporated in sol-gel coatings, which look simple in batch-scale application in laboratory research. However, it is not clear whether such treatments can be adapted to continuous handling in industrial production. The use of hybrid systems, such as phosphonic acid-modified graphene oxide, is quite promising. However, the economics of such treatments will determine their potential commercial exploitation. We believe that the combination of various organic and inorganic conversion technologies for aluminum is needed to obtain industrially-feasible solutions. The development of novel phosphonic acid chemistry with a focus on self-healing, superior galvanic protection, and higher mechanical performance will be a strong focus in the future. New phosphonic acids should be screened for their toxicological profile and environmental impact before any possible commercial exploitation.
"Materials Science",
"Chemistry"
] |
Anti-proliferative effects of mesenchymal stem cells (MSCs) derived from multiple sources on ovarian cancer cell lines: an in-vitro experimental study
Mesenchymal stem cells (MSCs) have surfaced as ideal candidates for the treatment of different therapeutically challenging diseases; however, their effect on cancer cells is not well determined. In this study, we investigated the effect of MSCs derived from human bone marrow (BM), adipose tissue (AT), and umbilical cord (UC) on ovarian cancer. Ovarian tumor marker proteins were measured by ELISA. The proliferative, apoptotic, and anti-inflammatory effects of the MSCs were measured by flow cytometry (FCM). MMP expression was measured by RT-PCR. The co-culture of the cancer cell lines OVCAR3, CAOV3, IGROV3, and SKOV3 with the conditioned media of MSCs (CM-MSC) and with MSCs showed an increase in cellular apoptosis, along with a reduction in the level of CA-125 and a decline in LDH and beta-hCG. A decrease in CD24 in the cancer cell lines co-cultured with the CM-MSCs indicated a reduction in cancer tumorigenicity. In addition, the invasion and aggressiveness of the cancer cell lines were significantly decreased by CM-MSC; this was reflected by a decrease in MMP-2, MMP-9, and CA-125 mRNA expression, and an increase in TIMP-1, -2, and -3 mRNA expression. An increase in the IL-4 and IL-10 cytokines, and a decrease in GM-CSF, IL-6, and IL-9, were also noted. In conclusion, mesenchymal stem cells derived from different sources and their conditioned media appear to have a major role in the inhibition of cancer aggressiveness and might be considered a potential therapeutic tool in ovarian cancer.
Introduction
Ovarian cancer is the most lethal of all gynecologic malignancies. Projections for 2018 estimate approximately 22,240 new diagnoses and 14,070 female deaths in the United States alone [1]. Mortality in ovarian cancer is predominantly due to tumor recurrence and acquired chemoresistance [2-4]. Tumor recurrence is common because the majority of patients are diagnosed at advanced stages of the disease [5,6].
Mesenchymal stem cells (MSCs) have emerged as ideal agents for the restoration of damaged tissues in clinical applications due to their undifferentiated cell characterization, self-renewal ability with a high proliferative capacity, their paracrine, trophic effect and their mesodermal differentiation potential [19,20]. Additionally, they produce bioactive anti-inflammatory agents and support regeneration of injured tissues [19,21,22].
The therapeutic potential of MSCs has been explored in a number of phase I, II, and III clinical trials [23], of which several were targeted against graft versus-host disease and to support engraftment of hematopoietic stem cells [24,25]. Yet, very few of these trials use MSCs to treat tumor diseases [23,26,27], such as gastrointestinal, lung, and ovarian cancer. As for the study targeting ovarian cancer, MSCs expressing cytokines were used as therapeutic payload [23] and were derived from bone marrow.
Bone marrow (BM) was the first source of MSCs reported as a potential candidate for cell replacement therapy [28] followed by adipose tissue (AT) [29] and more recently umbilical cord (UC) [30,31]. Compared to BM, AT and UC are favorably obtained by less invasive methods. AT contains similar stem cells termed as processed lipoaspirate (PLA) cells, which can be isolated from cosmetic liposuctions in large quantities and grown easily under standard tissue culture conditions [26]. MSCs derived from UCs have the potential to be cultured longest and have the highest proliferation capacity compared to the other sources [26].
An effective curative therapy for ovarian cancer has yet to be developed, according to the current status of MSC-based therapeutic approaches for cancer. This study seeks to investigate the potential of mesenchymal stem cells derived from AT, BM, and UC as promising sources of anti-tumor effects on ovarian cancer.
Collection of MSCs
Ten collections were made from each MSC source (BM, UC, and AT). BM aspirates were obtained by puncturing the iliac crest of participants ranging in age from 30 to 60 years at the hematology department of the Middle East Institute of Health University Hospital. UC units were collected from the placenta of full-term deliveries in a multiple bag system containing 17 mL of citrate phosphate dextrose buffer (Cord Blood Collection System; Eltest, Bonn, Germany) [26] and processed within 24 h of collection. AT was obtained from participants ranging in age from 26 to 57 years who underwent liposuction at Hotel Dieu de France Hospital (HDF) (Beirut, Lebanon). The Saint-Joseph University and the HDF Ethics Review Board approved the retrieval of all MSC collections (approval reference number: CEHDF1142), and all patients were asked to read and approve/sign informed consent forms prior to any participation.
Isolation and culture of mononuclear cells from BM
The aspirates were diluted 1:5 with 2 mM ethylenediaminetetraacetic acid (EDTA)-phosphate-buffered saline (PBS) (Sigma-Aldrich). The mononuclear cell (MNC) fraction was isolated by density gradient centrifugation at 435 g for 30 min at room temperature using Ficoll-Hypaque-Plus solution (GE Healthcare BioSciences Corp) and seeded at a density of 1 × 10⁶ cells per cm² into T75 or T175 cell culture flasks (Sigma-Aldrich). The first change of medium was performed within 3 days after isolation. The resulting fibroblastoid adherent cells were termed BM-derived fibroblastoid adherent cells (BM-MSCs) and were cultivated at 37°C in a humidified atmosphere containing 5% CO₂. The expansion medium consisted of Dulbecco's modified Eagle's medium-alpha modification (Alpha-MEM) + 10% fetal bovine serum (FBS; Invitrogen; Thermo Fisher Scientific, Inc., Waltham, MA, USA) and 5% penicillin-streptomycin-amphotericin B solution (PSA; Hyclone; GE Healthcare, Logan, UT, USA). BM-MSCs were maintained in Alpha-MEM + 10% FBS and 5% PSA until they reached 70 to 90% confluency. Cells were harvested at subconfluence using trypsin (Sigma-Aldrich). Cells at the second passage and thereafter were replated at a mean density of 1.3 ± 0.7 × 10³/cm².
Isolation and culture of MSC from human umbilical cord Wharton's jelly
Umbilical cord was collected in PBS supplemented with 10% PSA and transferred to the laboratory within a maximum of 12 h. After washing, cord samples were cut into 1-2 cm sections, the umbilical vessels (artery and veins) were removed, and Wharton's jelly was collected and minced into pieces, then digested with collagenase overnight and cultured in flasks.
Nonadherent cells were removed 12 h after initial plating. The same culture conditions and media were applied as described for BM-MSCs. Only adherent fibroblastoid cells (UCMSC) appeared as CFU-F and were harvested at subconfluence using trypsin (Sigma-Aldrich). Cells at the second passage and thereafter were replated at a mean density of 3.5 ± 4.8 × 10³/cm².
Isolation and culture of PLA cells from AT
To isolate the stromal vascular fraction (SVF), lipoaspirates were washed intensively with PBS containing 5% PSA. Next, the lipoaspirates were digested with an equal volume of 0.075% collagenase type I (Sigma-Aldrich) for 30-60 min at 37°C with gentle agitation. The activity of the collagenase was neutralized with DMEM containing 10% fetal bovine serum (FBS; Invitrogen; Thermo Fisher Scientific, Inc., Waltham, MA, USA). To obtain the high-density SVF pellet, the digested lipoaspirate was centrifuged at 1,200 g for 10 min. The pellet was then resuspended in DMEM containing 10% FBS and filtered through a 100 μm nylon cell strainer (Falcon). The filtered cells were centrifuged at 1,200 g for 10 min. The resuspended SVF cells were plated at a density of 1 × 10⁶/cm² into T75 or T175 culture flasks. Nonadherent cells were removed 12-18 h after initial plating by intensively washing the plates. The resulting fibroblastoid adherent cells, termed AT-derived fibroblastoid adherent cells (ADMSC), were cultivated under the same conditions as described for BM-MSCs. ADMSCs were harvested at subconfluence using trypsin (Sigma-Aldrich). Cells at the second passage and thereafter were replated at a mean density of 1.8 ± 3.1 × 10³/cm².
Co-culture of MSCs with cancer cell lines
Human ovarian epithelial cancer cell lines SKOV3, OVCAR3, IGROV3, and CAOV3 were purchased from the American Type Culture Collection (ATCC; Manassas, VA, USA) and cultured in DMEM/F12 supplemented with 10% FBS and 5% PSA. The cells were cultured at 37°C in a humidified atmosphere with 5% CO₂.
Conditioned media (CM) preparation
ADMSC, BM-MSC, and UCMSC were cultured with DMEM/F12 supplemented with 5% PSA at subconfluency. Thereafter, the supernatant, containing all released cytokines and chemokines to be studied, was collected.
Co-culture maintenance
Only ADMSC, BM-MSC, and UCMSC before passage 3 were used for co-culture. OVCAR3, SKOV3, IGROV3, and CAOV3 were cultured in direct contact with ADMSC, BM-MSC, and UCMSC cells in DMEM/F12 with 5% PSA and in a sterile humidified incubator with 5% CO 2 at 37°C for 48 h, and with the supernatant under the same conditions. The co-culture ratio of ADMSCs, BM-MSCs, and UCMSCs to SKOV3, OVCAR3, IGROV3, and CAOV3 cells was 1:1. Cell lines were considered the control group and underwent the same culturing time and conditions.
Cytokine analysis
For the cytokine analysis, we used the MACSPlex Cytokine 12 kit. Supernatants were mixed with capture beads specific for each cytokine: granulocyte/macrophage colony-stimulating factor (GM-CSF), interferon (IFN)-α, IFN-γ, interleukin (IL)-2, IL-4, IL-5, IL-6, IL-9, IL-10, IL-12p70, IL-17A, and tumor necrosis factor (TNF)-α. PE-conjugated antibodies were added and incubated for 2 h at room temperature, protected from light. After centrifugation, the bead-containing pellets were resuspended. The MACSQuant® express mode was used to perform flow cytometric acquisition and data analysis. Background signals were determined by analyzing beads incubated with cell culture medium only, and were subtracted from the signals of beads incubated with supernatants.
RNA extraction and quantitative real-time RT-PCR
Total RNA was extracted from samples using the QIAamp RNA extraction kit (Qiagen Inc., Valencia, CA, USA). RNA quality and yields were analyzed using a NanoDrop spectrophotometer. Complementary DNA (cDNA) was synthesized from 500 ng of total RNA in a 20 μL reaction using the iScript™ cDNA Synthesis Kit (Bio-Rad Laboratories, CA).
Quantitative real-time PCR was performed in triplicate with the iQ™ SYBR® Green Supermix (Bio-Rad Laboratories, CA). The reaction conditions were: polymerase activation at 95°C for 5 min, then 40 cycles of denaturation at 95°C for 20 s and annealing/extension at 62°C for 20 s. The relative quantification of gene expression was normalized to the expression of endogenous GAPDH. Primer sequences, verified using BLAST (https://blast.ncbi.nlm.nih.gov/blast.cgi), are indicated in Table 1.
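The paper states that expression was normalized to endogenous GAPDH but does not give the quantification formula; a standard choice for SYBR Green qPCR is the 2^-ΔΔCt (Livak) method, sketched below in Python. All Ct values in the example are made-up placeholders, not the study's data.

```python
# Hedged sketch of relative quantification via the 2^-ddCt (Livak) method,
# normalizing a target gene to GAPDH. Placeholder Ct values for illustration.

def relative_expression(ct_target, ct_gapdh, ct_target_ctrl, ct_gapdh_ctrl):
    """Fold change of a target gene vs. control, normalized to GAPDH."""
    d_ct_sample = ct_target - ct_gapdh             # normalize sample to GAPDH
    d_ct_control = ct_target_ctrl - ct_gapdh_ctrl  # normalize control to GAPDH
    dd_ct = d_ct_sample - d_ct_control
    return 2.0 ** (-dd_ct)

# e.g., TIMP-1 in OVCAR3 co-cultured with CM-MSC vs. untreated OVCAR3
print(relative_expression(24.1, 18.0, 26.5, 18.2))  # ~4.6-fold up (placeholder)
```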
Apoptosis test
Apoptosis was evaluated by staining cell cultures with 10 μL Annexin V (fluorescein isothiocyanate; Miltenyi-Biotec Annexin V-FITC kit) at 4°C for 20 min in the dark, followed by counterstaining with 5 μL propidium iodide (PI) at room temperature for 5 min in the dark. Flow cytometric detection was performed using a MACSQuant analyzer on 10⁴ cells per sample, and the calculation of apoptotic cells was performed using the MACSQuant computer software.
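For readers unfamiliar with the readout, the quadrant logic conventionally applied to Annexin V/PI data is sketched below; the gate values and event list are illustrative assumptions only (real gates are instrument- and staining-dependent), not values from this study.

```python
# Hedged sketch of the usual Annexin V-FITC / PI quadrant classification.
# Gates below are assumed placeholders, not the study's instrument settings.

ANNEXIN_GATE = 1_000  # intensity separating Annexin V -/+ (assumed)
PI_GATE = 800         # intensity separating PI -/+ (assumed)

def classify_event(annexin_v: float, pi: float) -> str:
    if annexin_v < ANNEXIN_GATE and pi < PI_GATE:
        return "viable"            # Annexin V- / PI-
    if annexin_v >= ANNEXIN_GATE and pi < PI_GATE:
        return "early apoptotic"   # Annexin V+ / PI-
    if annexin_v >= ANNEXIN_GATE and pi >= PI_GATE:
        return "late apoptotic"    # Annexin V+ / PI+
    return "necrotic"              # Annexin V- / PI+

events = [(200, 100), (1500, 300), (2000, 1200), (400, 900)]  # fake events
labels = [classify_event(a, p) for a, p in events]
apoptotic_pct = 100 * sum("apoptotic" in lab for lab in labels) / len(labels)
print(labels, f"{apoptotic_pct:.0f}% apoptotic")
```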
Apoptosis was also examined by 4′,6-diamidino-2-phenylindole dihydrochloride (DAPI) staining. Cell lines were cultured on glass slides for 48 h with ADMSC, BM-MSC, and UCMSC or their supernatant, while control cells were cultured in serum-free media. After a 48 h incubation period, the slides were rinsed with PBS and fixed in 4% formaldehyde for 10 min at room temperature. Following two washes with PBS, cells were stained with DAPI diluted in PBS for 15 min. The slides were then visualized on a fluorescence microscope.
Tumor marker assays: CA-125, CA-19-9, AFP, CEA, LDH, and beta-hCG
CA-125, CA-19-9, AFP, CEA, LDH, and beta-hCG were measured by ELISA. The supernatant and the biotinylated monoclonal antibody form a complex through the interaction of biotin and streptavidin during a 60 min incubation. The excess was eliminated by washing, and an enzyme-antibody conjugate was added to form the final sandwich complex. After incubation, the excess was washed away again. Afterwards, sulfuric acid was added to stop the reaction, and the solution's color changed from blue to yellow. The color intensity was directly correlated with the concentration of the samples. The absorbance was read at 450 nm.
Statistical analyses
Statistical significance was determined using a paired t-test and a one-way ANOVA. P values below 0.05 were considered statistically significant.
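A minimal sketch of the two stated analyses in Python using SciPy is given below; the data arrays are placeholder measurements invented for illustration, not the study's data.

```python
# Minimal sketch of a paired t-test and a one-way ANOVA with SciPy,
# mirroring the stated analysis plan. Arrays are placeholder values.
from scipy import stats

control = [100.0, 98.5, 102.3, 99.1, 101.0]  # e.g., CA-125, control wells
cm_msc  = [62.0, 70.4, 55.9, 64.8, 60.2]     # same wells after CM-MSC

t_stat, p_paired = stats.ttest_rel(control, cm_msc)  # paired t-test

# One-way ANOVA across the three MSC sources (placeholder groups)
bm, uc, at = [60, 64, 58], [66, 70, 63], [52, 55, 50]
f_stat, p_anova = stats.f_oneway(bm, uc, at)

print(f"paired t-test p = {p_paired:.4f}, ANOVA p = {p_anova:.4f}")
# p < 0.05 is taken as statistically significant, as in the paper.
```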
Phenotyping and characterization of BM, UC, and AT derived MSCs
At an initial plating density of 1 × 10⁶ cells per cm², both BM- and AT-derived MSCs formed a monolayer 4-5 days after initial plating. In contrast, UCMSCs were initially detected 2-4 weeks after plating at the same initial plating density. Regardless of the cell source, a fibroblast-like morphology was detected in all cultures (Fig. 1a).
For further characterization of the MSCs, surface protein expression was examined by flow cytometry. All MSCs derived from the three different sources expressed CD44, CD73, CD105, and CD90 and were negative for CD14, CD34, CD45, HLA-DR, CD133, and CD24. CD105 was more highly expressed on BM- and ADMSCs than on UCMSCs (Fig. 1c, d, e, f).
Differentiation potential of MSCs
To further confirm the identity of our cells, they were differentiated into chondrocytes, adipocytes, and osteocytes (Fig. 2a, b, c), where Oil Red O staining indicates the presence of adipocytes, Alizarin Red indicates osteocyte formation, and Alcian Blue indicates chondrocyte formation.
Immunophenotyping and characterization of cancer cell lines
After isolation and plating, the cancer cell lines OVCAR3 and IGROV3 showed a clustered morphology, whereas SKOV3 and CAOV3 showed a fibroblast-like morphology (Fig. 1b). Flow cytometry analysis showed that these cancer cell lines express cell markers similar to our MSCs, except for CD24, which is known to modulate the growth and differentiation of cancer cells. OVCAR3 and CAOV3 were found to be positive for CD44, CD73, CD105, CD90, CD133, and CD24 and negative for CD14, CD34, CD45, and HLA-DR. SKOV3 was positive for CD44, CD73, CD105, CD90, and CD133, negative for CD14, CD34, CD45, and HLA-DR, and low for CD24. IGROV3 was positive for CD44, CD73, CD105, CD90, and CD24 and negative for CD133, CD14, CD34, CD45, and HLA-DR (Fig. 1c, d, e, f).
Inhibition of tumor markers by MSCs derived from AT, BM and UC
To study the effect of MSCs on tumor cell markers, flow cytometry analyses were performed on CD24 and CD44. Our results showed a significant decrease in the level of CD24 when OVCAR3, CAOV3, IGROV3, and SKOV3 were co-cultured with BM-MSC, UCMSC, ADMSC, and CM-MSCs (p < 0.0001) (Fig. 3a, b, c, d). CA-125, CEA, AFP, LDH, CA-19-9, and beta-hCG are known to be expressed in ovarian cancer and are used as tumor markers. We performed an ELISA test to measure their levels in co-culture with MSCs. A significant decrease in CA-125 expression in OVCAR3, SKOV3, and CAOV3 was seen when incubated with the supernatant of CB and AT (75-90%, p = 0.0017); CA-125 is not expressed in IGROV3. A decrease in the secretion of LDH (10-20%, p = 0.0003) and beta-hCG (16-20%, p = 0.04) was also observed in all cell lines. Similar results were found by co-culture of the cell lines in direct contact with the MSCs; however, the effect was more drastic with the CM-MSCs (Fig. 3e, f, g, h).
Inhibition of cell proliferation and apoptosis
Direct co-culture of cancer cells with MSCs and culture with the CM-MSCs showed that ADMSCs, BM-MSCs, and UCMSCs induced apoptosis of SKOV3, IGROV3, CAOV3, and OVCAR3 cells. Flow cytometry with Annexin V staining was conducted to identify changes in the apoptosis rate of SKOV3, OVCAR3, IGROV3, and CAOV3 cells in co-culture with MSCs in direct contact and with the supernatant (Fig. 4a, b, c, d, e, f). Apoptosis was more significant with the supernatant derived from ADMSC than with the supernatants derived from BM-MSC and UCMSC. This result was confirmed by DAPI blue fluorescence staining (Fig. 4g, h, i, j, k).
Effect of MSCs on invasion and metastasis
Migration and invasion are initially controlled by the dysregulated expression of MMPs and tissue inhibitors of metalloproteinases (TIMPs). To examine whether TIMPs contribute to the decreased migration and invasion capacity of OVCAR3, CAOV3, IGROV3, and SKOV3 cells upon co-culture with MSCs or CM-MSCs, we examined TIMP-1, -2, and -3 expression by real-time PCR. We observed that MSCs and their CM significantly increased TIMP-1, -2, and -3 mRNA levels and decreased MMP-2, MMP-9, and CA-125 mRNA expression (Fig. 5a, b, c, d).
Anti-inflammatory effect of MSCs
The cytokine profiles determined by flow cytometry on the collected supernatants of ADMSCs, BM-MSCs, and UCMSCs in co-culture with OVCAR3, IGROV3, CAOV3, and SKOV3, in direct contact and with the CM-MSCs, showed that the levels of anti-inflammatory cytokines (IL-4, IL-10) were elevated (10-12%, p = 0.001), whereas the level of the pro-inflammatory cytokine IL-9 decreased. The levels of GM-CSF and IL-2 also significantly decreased (45-50%, p = 0.001) in OVCAR3, SKOV3, and CAOV3 when co-cultured with the supernatant of ADMSCs, but increased in IGROV3 (25%, p = 0.0001) (Fig. 6a, b, c, d).
Discussion
The mortality rates of ovarian cancer are increasing, and no known therapy has proven efficacy in inhibiting or delaying the progression of the disease. Recently, MSCs have emerged as a potential therapeutic cell therapy for many diseases, including some forms of cancer [32,33]. Many controversies have been reported concerning the use of MSCs in cancer, with some studies reporting inhibition of tumor invasiveness and metastasis [34,35] and others reporting an increase in tumorigenicity [36,37]. In our paper, we have shown that MSCs derived from AD, BM, and UC have anti-proliferative and pro-apoptotic effects on ovarian cancer. In fact, to our knowledge, this is the first in vitro study to explore the role of MSCs derived from various sources (ADMSC, BM, and UC) on different ovarian cell lines (OVCAR3, CAOV3, IGROV3, and SKOV3). After co-culturing our cell lines with MSCs alone or with the MSC conditioned media (CM), the cancer markers CA-125, LDH, and beta-hCG decreased significantly, accompanied by a decrease in the proliferation of the four cell lines. Interestingly, CD24, depicting the aggressiveness of CAOV3 and SKOV3, was inhibited by more than 60%; this decrease in invasiveness was confirmed by a drastic decrease in MMP-2 and MMP-9. Our results are consistent with many reports suggesting that co-culture of UCMSCs and tumor cells in vitro induces apoptosis and inhibits proliferation of tumor cells [33,37]. For example, UCMSCs displayed obvious inhibitory effects on tumor growth in studies of malignant tumors including melanoma, colon carcinoma, hepatocarcinoma, mammary cancer, hematological malignancy, and pulmonary carcinoma [37-40]. Xiufeng Li et al. showed that, as the co-culture time of CAOV3 cells and hUCMSCs increased, the number of apoptotic CAOV3 cells grew and their proliferation was remarkably inhibited; where stem cells grew, ovarian cancer cells were first enclosed, then gradually became necrotic and developed into clusters of granules [33]. This confirms that MSCs can alter the metastatic profile of ovarian cancer [33,41].
CD24 is the most expressed marker in malignant hematological pathologies and several solid tumors [42-44]. Its expression correlates with cancer aggressiveness, differentiation, invasion, migration, tumorigenicity, and resistance in vitro [42]. We found by flow cytometry analysis that CD24 was significantly decreased when co-cultured with MSCs and MSC-CM; however, the results were more significant with the CM of MSCs. This is consistent with the latest theory stating that the therapeutic effects of MSCs are due to their paracrine effects and not to their differentiation capacity [45-47]. Indeed, Caplan (2019) [48] suggested changing the name of stem cells to "medicinal signaling cells". The array of secretions, or secretome, has multiple therapeutic effects, such as anti-inflammatory, anti-proliferative, and anti-fibrotic effects, besides many other roles [49-51]. The anti-proliferative effects of the secretome have been shown by many investigators: Kalamegan et al. showed that human Wharton's jelly stem cell (hWJSC) extracts inhibit the ovarian cancer cell lines OVCAR3 and SKOV3 in vitro by inducing cell cycle arrest and apoptosis [52]; in another study, the UCMSC secretome was shown to significantly inhibit the proliferation of SKOV3 cells, and the survival rates of ovarian cancer cells decreased with the exposure time to the UCMSC culture supernatant, i.e., the secretome [31,33].
Among the factors secreted by MSCs are TIMPs, known to play a major role in cell death and in the inhibition of the progression and metastasis of numerous cancers, including ovarian cancer [53,54]. Inhibiting metalloproteinases might be one possible way to prove the ability of MSCs to decrease and fight the aggressiveness of this cancer [55,56]. We report in our study that the MSC-CM increased TIMP-1, -2, and -3 while decreasing MMP-2 and MMP-9. This supports the assumption that the MSC secretome plays a crucial role in cell death while inhibiting cancer progression. The decrease in MMPs and upregulation of TIMPs were also confirmed by various research groups, while others mention either no effect or a decrease in these parameters [57-59].
Many reports have established the link between cancer and inflammation, emphasizing that chronic inflammation contributes to tumor initiation and progression [60,61]. Anti-inflammatory cytokines play a substantial role in cancer [62]. Controversial results have been reported in which IL-4 and IL-10 either support or hinder tumor progression [63,64]. In fact, it has been indicated that a lack of IL-10 allows the induction of pro-inflammatory cytokines hampering anti-tumor immunity [65], and that an increased level might be used as a diagnostic biomarker in certain cancers such as stomach adenocarcinoma [66]. Tanikawa et al. have shown that a lack of IL-10 promotes tumor development, growth, and metastasis [63]. This was explained by the ability of IL-10 to inhibit inflammatory cytokines and to support the development of Treg cells and myeloid-derived suppressor cells [65]. It was also suggested that disruption of the interaction between IL-10 and its receptors may lead to enhanced inflammation, which could promote tumor growth [63]. IL-4 was also shown to suppress cancer-directed immunosurveillance and enhance tumor metastasis [38,67,68], while overexpression of IL-4 suppressed tumor development, tumor volume, and weight in mouse melanoma models [69]. Based on our results, we support the anti-inflammatory effects of these cytokines and think that the increased secretion of IL-4 and IL-10, together with the inhibition of pro-inflammatory cytokines, contributes to tumor suppression. We think that the paradoxical roles of the above-mentioned cytokines may depend on the expressing cells as well as the molecular environment [70]. We have shown that MSCs and MSC CM induced a significant increase in the levels of IL-4 and IL-10 in the four ovarian cancer cell lines.
Finally, our results suggest that MSCs and CM-MSCs inhibit tumor progression in the four different ovarian cancer cell lines. We think it is the toxic, pro-apoptotic effect of the stem cell secretions, rich in proteins and growth factors with known and unknown anti-inflammatory and apoptotic activities, that may contribute to the antitumor outcome [71]. Further studies are also needed to elucidate the underlying mechanism of the anti-cancerous activity. A possible theory could be that the secreted cytokines or the cell-to-cell contact between MSCs and cancer cells stimulates the factors responsible for tumor reversion, such as the translationally controlled tumor protein (TCTP), and pushes the cells to acquire the knowledge of how to escape malignancy [72,73].
Conclusion
In conclusion, mesenchymal stem cells derived from different sources, and their secretions or conditioned media, appear to have a major role in the inhibition of cancer aggressiveness. Although many controversies have been reported on the role of stem cells in cancer, it is crucial that researchers continue to examine the roles and mechanisms of MSCs in tumor progression in order to evaluate their therapeutic potential and to control cancer progression. Stem cell secretions emerge as a new research pathway to investigate as a potential therapeutic tool for many diseases, including tumor-driven ones.
Authors' contributions CK performed all the experiments, analyzed the results and statistics, and contributed to the writing; AI contributed to providing cells from patients; RS assisted in the experiments; MM contributed to the experimental design and the interpretation of the results; AA interpreted and analyzed the ELISA assays; JT assisted in all the experiments and contributed to the statistical analysis; JH was a major contributor in writing the manuscript. All authors read and approved the final manuscript.
"Medicine",
"Biology"
] |
Light transmission performance of translucent concrete building envelope
Abstract Energy efficient building envelopes are essential for sustainable development in civil engineering and architecture. In this preliminary investigation, a load-bearing structural building envelope is developed for daylight harvesting. A translucent concrete panel (TCP) is constructed using optical fibers (OFs) to transmit light and a common concrete mix design. A steel mesh is embedded in the TCP to increase its structural load bearing capacity. The panel has the potential to save energy and reduce carbon footprint by collecting, channeling, and eventually scattering sunlight. Constructability issues, including mechanical and optical losses, are analyzed and discussed. Numerical models of the single OF and the whole TCP are developed using ray tracing software, and the light transmission mechanisms are analyzed. Nonimaging sunlight collectors, namely the compound parabolic concentrator (CPC), together with the OFs, represent an efficient system for harvesting and guiding sunlight into interior spaces. The light transmission of a model made of a CPC and an OF is evaluated from an energy efficiency point of view.
PUBLIC INTEREST STATEMENT
Energy efficient building envelopes are essential for sustainable development in civil engineering and architecture. A translucent concrete panel (TCP) is constructed using optical fibers (OFs) to transmit light and common concrete mix design. It has the potential to save energy and reduce carbon footprint by collecting, channeling and eventually scattering the sunlight. Constructability issues including mechanical and optical losses are analyzed and discussed. Numerical models of the single OF and the whole TCP are developed using ray tracing software and the light transmission mechanisms are analyzed. Compound parabolic concentrator (CPC) and the OFs represent an efficient system for harvesting and guiding the sunlight into the interior spaces. The light transmission of a model made out of a CPC and an OF is evaluated from an energy efficiency point of view. The results will help to develop energy efficient buildings.
Introduction
The translucent concrete panel (TCP) in this study has a certain number of optical fibers (OFs), with either glass or plastic cores, embedded and regularly aligned in concrete. The OFs transmit light through the concrete panel. The "translucency degree" of the TCP depends on the density of the embedded OFs. It is to be noted that, for plastic OFs, the core can be polystyrene or polymethylmethacrylate and the cladding is generally silicone or Teflon. On the other hand, for glass OFs, both the cladding and the core are made of silica with a small amount of dopants, e.g., boron or germanium, to change the refractive index (Lacy, 1982).
TCP is a new building material; it came into being at the beginning of this century for building decoration. In 2001, Hungarian architect Aron Losonzi invented LiTraCon™, the first commercially available form of translucent concrete (Litrocon Ltd 2012). It is a combination of OFs and fine concrete, combined in such a way that its appearance is perceived as homogeneous. LiTraCon™ is manufactured in blocks and panels for decoration. Litracon pXL® is a slightly different product, offered by the same company, that uses polymethyl methacrylate in place of glass OFs. At the Shanghai 2010 EXPO, Italy took the opportunity to build its pavilion out of translucent concrete (TC) using about 4,000 i.light® blocks, each 100 cm × 50 cm × 5.0 cm (Bates, 2010; Hipstomp, 2010). The blocks were heavier than the glass panels used as façades in buildings. The standard products of the same manufacturer also include 120 cm × 60 cm panels with 1.5 cm or 3.0 cm thickness (Lucem Lichtbeton, 2018). These thinner products are suitable for building façades and are generally laminated with light-emitting diode plates for commercial advertising. Another product features plastic fibers arranged in a regular grid, namely Pixel Panels, developed by Bill Price of the University of Houston (Klemenc, 2011). These panels transmit light from one face of a wall to the other in a pattern, where light that shines through the panel resembles thousands of tiny stars in a night sky. The University of Detroit-Mercy also developed a process to produce translucent panels made of Portland cement and sand and reinforced with a small amount of chopped fiberglass [6]. The primary focus of TC technology has previously been on its aesthetic appeal and its application in artistic design. Few studies have addressed its light-transmitting, mechanical, and self-sensing properties or the long-term durability of the material (Ahuja et al., 2015; He et al., 2011; Mead & Mosalam, 2017; Mosalam & Nuria, 2018). However, a comprehensive experimental study of TCP addressing its daylight harvesting properties has not yet been conducted.
Nowadays, sustainable development has become an inevitable trend in every walk of society (Loonen et al., 2017), including civil engineering and architecture. Therefore, development and use of sustainable materials, which are green, energy efficient and low-cost, are gaining more interest. The building envelope defines the interior environment. Thus, the energy efficiency of the envelope affects the efficiency of the entire building system. Furthermore, if the envelope can capture more daylight into the building, the electric lighting load can be reduced and further energy savings are achievable. Compared to a traditional electric lighting system, daylight is more energy efficient and more appealing for a healthy environment and human productivity and comfort because it contains almost the full spectrum of the sunlight (Edwards & Torcellini, 2002).
TCPs with sunlight concentrators provide the possibility of concentrating and transmitting sunlight into the indoor environment. In this way, the building envelope subsystem saves energy and reduces carbon footprint by collecting and distributing sunlight without reducing its bearing capacity. Nonimaging sunlight collectors, e.g., compound parabolic concentrator (CPC) (Chaves, 2008;Winston et al., 2005), and light conduits harvest and guide sunlight into the interior spaces. Artificial lighting load can then be significantly reduced. Construction of the proposed TCP, its light transmission simulation and experiments are discussed in the paper.
Design and construction of TCP
The construction process of TCPs is different from that of traditional concrete panels due to the existence of the OFs. The spatial arrangement of the OFs and the reinforcement is critical during the whole process. More advanced TCPs will be developed and studied in future work.
Clear distance between OFs
There is a relationship between the clear distance of two neighboring fibers, the diameter of the used OFs, and the intended volume ratio (density of the parallel fibers) of the OFs embedded in the TCP; refer to Figure 1. This clear distance affects the constructability of the TCP. When this distance is smaller than the specified maximum aggregate size of the concrete, it is hard to place the concrete. Moreover, from a load bearing capacity point of view, a smaller density and a smaller diameter of OFs imply fewer chances of inborn defects, improving the structural performance. On the other hand, smaller diameter fibers require intense labor, and a smaller density of OFs in the TCP reduces its light transmission. Therefore, there should be a balance among the above issues; a sketch of the spacing calculation is given below. For the first TCP in this study, concrete without coarse aggregate, i.e., mortar, was initially used to temporarily avoid these issues, and a 5% volume ratio of OFs with 2 mm diameter was used.
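The following Python sketch makes the geometric relation explicit, assuming the OFs are laid out on a square grid (the square-grid assumption is ours; the paper only states that the fibers are parallel and regularly aligned). For a fiber diameter d and target volume ratio ρ, the center-to-center pitch p satisfies ρ = (πd²/4)/p².

```python
import math

# Hedged sketch: clear distance between neighboring parallel fibers for a
# given diameter and volume ratio, assuming a square grid (our assumption).

def clear_distance(d_mm: float, volume_ratio: float) -> float:
    """Clear distance between neighboring fibers on a square grid (mm)."""
    pitch = d_mm * math.sqrt(math.pi / (4.0 * volume_ratio))  # center spacing
    return pitch - d_mm

# 5% volume ratio of 2 mm diameter fibers, as used for the first TCP:
print(f"{clear_distance(2.0, 0.05):.1f} mm")  # ~5.9 mm clear distance
```

This clear distance can then be compared against the specified maximum aggregate size to judge whether the concrete can be placed.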
Formwork and construction
Acrylic plates were used to construct the form of the TCP. Holes were drilled into the plates to provide cavities for the OFs. The dimension of the TCP was selected as 30.5 cm × 30.5 cm, and its depth was 7.6 cm. The clear distance between the OFs was 8 mm leading to 1,600 holes in the TCP; refer to Figure 2(a). After the holes were drilled in the two large acrylic plates, OFs were inserted into the holes in these two plates. Figure 2(b) shows the finished form. In order to reinforce the TCP, steel wire mesh was placed in the middle of the panel. Upon completion of the form, common mortar mix was placed in the form, Figure 2(c).
The panel was placed into the curing room for 7 days before removing the form, Figure 3(a). The panel looks similar to traditional concrete panels at first sight. However, when placed in daylight or other light source, its light transmission performance is easily identified, Figure 3
Light transmission analysis of the optical fibers
Due to the OFs, the TCP has the ability to transmit light. Therefore, understanding the light transmission properties of the OF is a prerequisite for analyzing the light transmission properties of the whole TCP.
Working mechanism of OFs
In the TCP, OFs are used for light transmission. The target is transmitting and guiding the light, where the OFs behave as light conduits. Understanding the characteristics of different fiber types is useful in understanding the applications for which they are used (Alwayn, 2004; Goff, 2002; Industrial Fiber Optics, Inc, 2002). There are three basic types of OFs: multimode graded-index fiber, multimode step-index fiber, and single-mode step-index fiber. Only large diameter single-mode OFs were used in this study because of their low cost and availability on the market. OFs provide a method of transmitting light through long thin fibers of glass or plastic by a phenomenon known as total internal reflection; refer to Figure 4. A fiber achieves this by essentially having a perfect mirror coating the outside of an internal transparent core. The light enters one end of the OF and hits the inside of the outer cladding, which has a lower index of refraction. When the angle of incidence at the core-cladding boundary, measured from the normal, is greater than the critical angle, the light is reflected back into the fiber. This is referred to as total internal reflection (TIR), and thus light travels down the length of the fiber, i.e., the fiber does not have to be straight.
Acceptance angle and numerical aperture
Numerical aperture is a parameter that is often used to specify the acceptance angle of a fiber (Hui & O'Sullivan, 2009). Figure 5 shows an axial cross-section of a step-index fiber and a light ray coupled into the fiber at the left cross-section, where $n_1$ and $n_2$ are the refractive indices of the fiber core and cladding, respectively. For the light to be coupled into a guided mode of the fiber, total internal reflection has to occur inside the core, requiring $\theta_i > \theta_c$, as shown in Figure 5, where $\theta_c = \arcsin(n_2/n_1)$ is the critical angle of the core-cladding interface. With this requirement on $\theta_i$, there is a corresponding requirement on the incidence angle $\theta_a$ at the fiber end surface. Making use of Snell's law and elementary trigonometry,

$$n_0 \sin\theta_a = n_1 \sin\theta_1 = n_1 \sqrt{1 - \sin^2\theta_i} \qquad (1)$$

If total internal reflection occurs at the core-cladding interface, we have $\theta_i > \theta_c$, i.e., $\sin\theta_i > \sin\theta_c = n_2/n_1$. This requires the incidence angle $\theta_a$ to satisfy the following condition [18]:

$$n_0 \sin\theta_a < \sqrt{n_1^2 - n_2^2} = \mathrm{NA} \qquad (2)$$

where the numerical aperture (NA) is defined as $\mathrm{NA} = \sqrt{n_1^2 - n_2^2}$. Light entering an OF within the cone defined by the half-acceptance angle $\theta_a$ is converted into guided modes and can propagate along the fiber. Outside this cone, light coupled into the fiber will radiate into the cladding. Similarly, the light exiting a fiber has a divergence angle defined by the numerical aperture. This property is critical for the fiber's ability to capture daylight.
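A short worked check of Equation (2) in Python is given below, assuming coupling from air (n₀ = 1.0). The core and cladding indices are illustrative values chosen so that NA = 0.5, matching the fiber in Table 1; the paper itself specifies only the NA, not the individual indices.

```python
import math

# Hedged worked example of Equation (2): half-acceptance angle from NA.
# Indices are assumed illustrative values giving NA ~ 0.5 (e.g., a PMMA
# core n1 = 1.49 with a cladding of n2 = 1.404).

def half_acceptance_angle_deg(n_core: float, n_clad: float,
                              n0: float = 1.0) -> float:
    na = math.sqrt(n_core**2 - n_clad**2)          # numerical aperture
    return math.degrees(math.asin(min(na / n0, 1.0)))

print(f"{half_acceptance_angle_deg(1.49, 1.404):.1f} deg")  # ~30 deg
```

With NA = 0.5 and n₀ = 1.0, the half-acceptance angle is about 30°, which is the value used in the simulations of the next section.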
Light transmission simulation of OFs
The detailed information of the OFs in the TCP is listed in Table 1. Solid models of straight and bent fiber were created with TracePro software (TracePro 72, 2012). In this section, light transmission analysis was conducted for a 2 mm diameter fiber with 12.7 mm length, which is the thickness of the wood panels, discussed in Section 4.1.
Straight OF
The ideal spatial arrangement of each OF in the TCP is straight because this avoids light loss due to bending. Therefore, the light tracing in the OF and its efficiency should be confirmed during light transmission. An OF model was created (Figure 6(a)) according to the properties listed in Table 1. The boundary of the light grid source was taken as annular (Figure 6(b)) because the acceptance cone of the OF is rotationally symmetric, and if the OF is connected to the CPC, the light rays irradiating from the exit aperture will also be rotationally symmetric. In order to simulate daylight, a random distribution of incident rays was selected because it is very close to the daylight distribution (a uniform distribution of rays gives similar results). One hundred rays in total, at 0.01 W per ray, were considered to model incidence angles from 0° to 50°. Since NA = 0.5 (Table 1), the half-acceptance angle, θa, is 30° for n0 = 1.0 (Equation 2). Figure 7 exhibits the variation of the light transmission efficiency with the incidence angle. This efficiency is defined as the ratio of the light luminance at the exit section to that at the entrance section. It is observed that the light transmission efficiency at all analyzed angles is below 1.0 because of reflection at the entrance section of the OF, regardless of the incidence angle. For an incidence angle of 30°, the light transmission efficiency is the lowest, at about 0.76. Moreover, for light incidence angles smaller or larger than 30°, the light transmission efficiency increases to 0.96 at 10° and 0.90 at 40°, and fluctuates around 0.85 for incidence angles > 40°. This fluctuation is attributed to the random distribution of the incident rays: at larger incidence angles, fewer rays were generated outside the acceptance cone of the OF, so more rays stayed inside the cone and were transmitted. Therefore, the transmission efficiency slightly increased compared to the cases of smaller incidence angles. It is noted that the incidence of daylight rays is random. Although rays come from many different directions, the daylight and illumination conditions of a given place differ depending on its location; factors such as latitude and season influence illumination, and the angle of the sun's rays is not the same for a place on the equator as for a place in Northern Europe or Australia. The direction of the sun's rays will also differ between summer and winter. These factors should be considered when analyzing the light transmission properties of the TCP at different locations.
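The reflection loss at the entrance face mentioned above can be estimated from the normal-incidence Fresnel reflectance, as sketched below in Python. The core index n₁ = 1.49 is an illustrative assumption (a typical PMMA value; the paper gives only the NA), but the resulting maximum coupling of about 0.96 is consistent with the simulated efficiency at small angles.

```python
# Hedged sketch: normal-incidence Fresnel reflectance at the fiber entrance,
# which bounds the coupling efficiency below 1.0 at all incidence angles.
# n1 = 1.49 (PMMA core) is an assumed illustrative value.

def fresnel_reflectance_normal(n0: float, n1: float) -> float:
    return ((n1 - n0) / (n1 + n0)) ** 2

r = fresnel_reflectance_normal(1.0, 1.49)
print(f"R = {r:.3f}, max coupling = {1 - r:.3f}")  # R ~ 0.039 -> ~0.96
```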
Bending of OF
In the actual construction process, bending disturbance of the OF is unavoidable because of vibration, handling, and other unexpected conditions. As a result, mechanical damage and optical loss occur. These effects are discussed in this section.
3.2.2.1. Mechanical stress analysis. This analysis is similar to the case of a steel reinforcing bar, where the mechanical bending analysis is related to the bending radius and bar diameter. In this case, the deflection of the OF is the main factor in the bending mechanism, Figure 8(a). From elementary mechanics, the bending stress can be calculated as $\sigma_b = E r / R$ (Equation 3), where $\sigma_b$ is the bending stress, $E$ is Young's modulus of the OF, $r$ is the radius of the OF, and $R$ is the bending radius of the OF. According to Equation (3), the relationship between the bending radius and the bending stress is shown in Figure 8(b), where the horizontal line is the tensile strength of the OF being studied, with $r = 1.0$ mm and $E = 2.5 \times 10^9$ Pa. The bending stress will be lower than the tensile strength of the OF if the bending radius is larger than 12 mm; a suggested lower limit of the bending radius is 20 mm, corresponding to a safety factor of 20/12 = 1.67 for the structural safety of the OFs. In practice, the long-term service conditions should be considered, as discussed in the literature (Galesemann & Castilone, 2002; Matsui et al., 2010).
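A minimal sketch of the bending check implied by Equation (3). The tensile strength used below is not quoted in the text; it is back-computed from the 12 mm limit read off Figure 8(b) and should be treated as an assumption.

```python
def bending_stress_mpa(E_mpa, r_mm, R_mm):
    """Outer-fiber bending stress, sigma_b = E * r / R (Equation 3)."""
    return E_mpa * r_mm / R_mm

E = 2.5e3        # Young's modulus in MPa (2.5 GPa)
r = 1.0          # fiber radius in mm
sigma_t = 208.0  # assumed tensile strength in MPa, back-computed from
                 # the 12 mm limit quoted in the text (not a measured value)

R_min = E * r / sigma_t                      # radius where sigma_b = sigma_t
print(f"R_min = {R_min:.1f} mm")             # ~12 mm
print(f"SF at 20 mm = {20.0 / R_min:.2f}")   # ~1.67
```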
3.2.2.2. Optical analysis.
It is obvious that bending of OFs causes loss of light rays and reduces their light transmission performance. Therefore, minimizing loss due to bending is important in arranging the OFs in the TCP. Bending losses for various wavelengths and bending parameters, e.g., radius of curvature and number of wrapping turns, have been reported in (Gupta, 2005). It is well known that loss increases with bending, especially at long wavelengths. Research has also shown that temperature and the presence of a protection layer can affect the bending loss (Tangnon et al., 1989; Wang et al., 2005). Different models have been suggested based on fitting experimental results but, due to the variation of bending loss with radius of curvature, some disagreement between theoretical modeling and actual experimental results has been reported (Renner, 1992). There are standard specifications (Telecommunications Industry Association, 2009) for the small-diameter OFs used in communications, because their cladding is very large compared to the core. These specifications require that the bending radius be larger than 15 times the diameter of the OF. This agrees with the numerical analysis results from the TracePro software, where almost all the rays were transmitted by the fiber under this bending radius limitation, as illustrated in Figure 9. Considering the mechanical (Figure 8(b)) and optical (Figure 9) requirements, it is clear that the optical property controls the bending limit of the OF. The bending radius should be greater than 15 times the diameter of the OF (30 mm in this study). This should be followed during the manufacturing process of the TCP.
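The two requirements can be combined in a one-line check using the values quoted above; the stricter (larger) minimum radius governs the manufacturing process.

```python
fiber_diameter_mm = 2.0
mechanical_limit_mm = 20.0                  # suggested limit from the stress analysis
optical_limit_mm = 15 * fiber_diameter_mm   # 15x-diameter rule cited in the text

governing_limit = max(mechanical_limit_mm, optical_limit_mm)
print(f"governing bend radius limit = {governing_limit:.0f} mm")  # 30 mm
```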
Light transmission performance
After understanding the light transmission mechanism of the OF and its constructability limitations, the next step is to study the light transmission behavior of the TCP. This is conducted in the present section.
Test setup
The existence of the OFs plays a major role in the light transmission behavior of the TCP; the presence of the enclosing material, such as concrete, is therefore unimportant here. For practicality of the study, and to focus on the light transmission issue, several hardwood panels (178 mm × 178 mm) were constructed and used instead of concrete panels for ease of construction. A certain number of holes were predrilled for each specified volume ratio, Table 2. OFs with 2 mm and 3 mm diameters were inserted in these holes. Two wood boxes were fabricated to simulate a small room. One box was painted black inside and outside, while the other box was painted white, Figure 10(a). An incandescent lamp (80 W) was installed in front of each box, facing the TCP side (although the panels were made of wood, we continue to refer to these light-transmitting panels as TCPs herein), to simulate the sunlight, Figure 10(b). A 76 mm diameter hole was drilled on the opposite side from the TCP for installing the sensor of the light meter, Figure 10.
Results and discussions
Each TCP was inserted into the front side of each of the two boxes, and the transmitted light was observed through the hole in the back side of the box, Figure 11(a). Experimental results are shown in Figure 11(b). From the results shown in this figure, several observations can be made: (1) independent of the color of the box (white or black), the light transmission ability of the TCP increased with the volume ratio of the OFs, (2) due to its higher reflection index, the light intensity observed from the white box is larger than that from the black box, and (3) panels with larger diameter OFs transmit more light than ones with smaller diameter OFs.
Light irradiation simulation
The light transmission performance of the TCP in the small box was modeled with TracePro software. Computational results are compared to the experimental results for validation of the computational model and for subsequent optimization of the design of the panel.
Model description
Three models were created with the TracePro software. Two of these models represented the black and white wooden boxes. The third model is a wood box without any paint. For these three models, the reflection indices are 0.2 for the black paint, 0.9 for the white paint, and 0.6 for the case without paint. Because information on the angular distribution of the incandescent lamp used in the tests was lacking, a standard surface light source, namely the CREE C450TR3041 as defined in TracePro (TracePro 72, 2012), was adopted instead; refer to Figure 12.
Results and discussions
Because of the difference between the light sources of the numerical and the physical models, the computed and measured flux values through the observation hole were expected to be different, Figure 13(a). It is to be noted that the box can be viewed as a light attenuation medium: the light approaching the OF surface in the TCP is transmitted into the box and finally arrives at the observation hole, and due to light reflection and refraction the light intensity is expected to be reduced. For comparison, a flux reduction ratio $R$ (Equation 4) is defined in terms of $F_s$, the surface flux of the TCP, and $F_w$, the flux at the location of the observation hole.
The light transmission efficiency of the three modeled boxes is plotted in Figure 13(b). From these results, the following observations can be made: (1) the internal painting of the box slightly affects the reduction ratio, with a higher reflection index giving a smaller reduction ratio $R$; this is observed in both the tests and the numerical results, (2) the reduction factor of the maximum flux is smaller than that of the average flux, and (3) the reduction ratio from the tests is larger than that from the simulation. Several reasons contributed to the differences between the experimental and numerical results: (1) the source of light in the model is different from that in the test, (2) the light distribution over the observation hole is not even, (3) the spatial propagation of the incandescent lamp used in the tests and of the CREE light used in the simulations are different, (4) the light meter should be placed right in front of the TCP to measure the incident flux, but in reality the luminance sensor was placed at a distance (about 10 mm) from the surface of the TCP, so the measured values are expected to be smaller than the actual values, and (5) the actual OFs had rough end surfaces due to the cutting process, Figure 13(c), affecting light acceptance and scattering [29]. Accordingly, a number of light rays scattered to the interior surfaces of the box; some of these rays were reflected and then arrived at the observation hole, so the test results were affected by the color of the box. The numerical model, on the other hand, did not take into account the surface roughness of the OF cross-sections. By checking the paths of the rays arriving at the observation hole, most of the light rays irradiated directly from the exit surfaces of the OFs, while a small number of light rays were reflected off the internal surfaces of the box.
For the black, no-paint, and white boxes, the reflected rays contained approximately 20%, 60%, and 90%, respectively, of the total energy compared to the incident energy. Therefore, due to the rough surfaces of the OFs, a large number of reflected rays (reflected more than twice) arrived at the observation hole. Accordingly, the measured flux value varied with the reflection index, Figure 13(a). On the other hand, the flux from the numerical model was insensitive to the reflection index, because the rays carrying the full energy arrived at the observation hole with much less reflection, owing to the smooth cross-sections of the modeled OFs.
In summary, the light irradiation simulation demonstrated the need to improve the light transmission experiments. The accuracy of the physical model should be improved in the following respects: 1) The surfaces of the OFs should be made smooth using a high-quality cutting machine.
2) The reflection index of the interior surfaces of a tested box should be accurately determined.
3) A standard light source should be used in the experiments.
4) The light test sensors need to be improved and installed properly to acquire reliable results.
Integration of compound parabolic concentrator
Due to the small cross-section and the limited number of OFs in a typical TCP, such panels cannot capture light beyond the acceptance cone of the OF. Horizontally arranged OFs in a TCP are efficient at capturing light rays parallel to the OF length, i.e., perpendicular to the OF cross-section, in multi-story buildings. For a TC façade with horizontally distributed OFs, it is ideal for the OF to face the light source (the sun) to capture direct light rays, because horizontally irradiating rays from the light source can easily enter the acceptance cone. Although a horizontal OF can capture all rays that fall into its acceptance cone, non-horizontal rays are reflected ones and have a lower energy content than direct rays from the sun. An efficient light concentrator, e.g., a CPC, needs to be integrated with the OF to enhance the light concentration.
Working mechanism of CPC
The CPC is a cone-shaped solar concentrator, analyzed here using the edge-ray principle, from research originally conducted in the mid-1960s that became fully developed in the 1970s and 1980s (Miñano, 1985, 1986; Miñano et al., 1983; Mitschke, 2009; Ries & Rabl, 1994). The CPC is a nonimaging (anidolic) concentrator that is almost ideal, having the maximum theoretical concentration ratio. Two-dimensional (2D) hollow CPCs [13] follow a similar working mechanism to 3D solid ones. Rays S1 and S2, which are parallel to the edge rays, enter the entrance aperture of the CPC and arrive at the parabolic internal surface. Subsequently, S1 and S2 are reflected to the exit aperture, while the edge rays arrive at the exit aperture directly without any reflections. Therefore, the light is concentrated from the bigger entrance aperture to the smaller exit aperture, Figure 14. In this figure, T1 and T2 are the exit rays of the entrance edge rays S1 and S2.
The governing equations of the meridian section of a CPC are developed by rotation of axes and translation of the origin. A compact parametric form can be presented by making use of the polar equation of the parabola. Figure 14 defines the key variables used in developing the meridian section, as given by Equations (5) to (8):
$f = a_0 (1 + \sin\theta_a)$ (5)
$h = f \cos\theta_a / \sin^2\theta_a$ (6)
$a = a_0 / \sin\theta_a$ (7)
Substituting Equation (5) into Equation (6) and simplifying by making use of Equation (7), $h$ can be expressed as follows:
$h = (a + a_0) / \tan\theta_a$ (8)
where $f$ is the focal length, $a$ is the radius of the entrance aperture, $a_0$ is the radius of the exit aperture, $\theta_a$ is the half-acceptance angle of the CPC, n-n' is the concentrator axis, and $h$ is its total length. The polar coordinates $(r, \varphi)$ needed to design and plot the CPC are shown in Figure 14 (Winston et al., 2005). In the TCP, $h$ and $a_0$ are determined by the panel design (thickness and OF diameter). Once $\theta_a$ is selected, Equations (5) and (7) can be used to determine $f$ and $a$, respectively. Subsequently, Equation (6) or Equation (8) can be used to obtain the corresponding $h$. The real CPC herein is 3D; it is defined as the shape swept out when the profile of the 2D CPC is rotated about its symmetry axis. The maximum concentration ratio, $C_{max}$, is useful to determine the concentration ability of the CPC to be designed. It is the ratio of the area of the entrance aperture to that of the exit aperture. For a circular aperture and 3D geometry, it is defined as (Chaves, 2008; Winston et al., 2005):
$C_{max} = \left( \frac{n'}{n \sin\theta_a} \right)^2$ (9)
It is noted that for a hollow CPC in air, the refractive indices $n$ and $n'$ are both equal to 1.0. In this case, $C_{max}$ is entirely determined by the half-acceptance angle of the CPC; e.g., if $\theta_a = 30°$, then $C_{max} = 4.0$. For a 2D CPC, the square in Equation (9) should be dropped.
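These relations can be checked numerically against the values quoted elsewhere in this paper; the sketch below reproduces $C_{max} = 4.0$ at $\theta_a = 30°$, the 33.2-2.4 concentration range for 10°-40°, and the 27.3 mm CPC length used later in the CPC-OF simulation (taking the exit aperture radius as 1 mm for a 2 mm diameter OF).

```python
import math

def cpc_geometry(a0_mm, theta_a_deg):
    """Standard 3D CPC relations (Winston et al., 2005), Equations (5)-(9):
    focal length f, entrance aperture radius a, total length h, and the
    maximum concentration ratio for a hollow CPC in air (n = n' = 1)."""
    t = math.radians(theta_a_deg)
    f = a0_mm * (1 + math.sin(t))          # Eq. (5)
    a = a0_mm / math.sin(t)                # Eq. (7)
    h = f * math.cos(t) / math.sin(t)**2   # Eq. (6), identical to Eq. (8)
    c_max = 1.0 / math.sin(t)**2           # Eq. (9) with n = n' = 1
    return f, a, h, c_max

print(f"C_max(30 deg) = {cpc_geometry(1.0, 30.0)[3]:.1f}")  # 4.0
print(f"C_max(10 deg) = {cpc_geometry(1.0, 10.0)[3]:.1f}")  # 33.2
print(f"C_max(40 deg) = {cpc_geometry(1.0, 40.0)[3]:.1f}")  # 2.4
_, _, h, c = cpc_geometry(a0_mm=1.0, theta_a_deg=12.0)
print(f"h(12 deg) = {h:.1f} mm, C_max = {c:.1f}")           # 27.3 mm, 23.1
```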
Introducing CPC in the TCP
There are three key factors determining the optimal shape of the CPC. For a TC building façade, the total length $h$ is limited by the actual depth of the panel, and the exit aperture radius $a_0$ is limited by the diameter of the selected OF. Several types of OFs are available, with diameters varying from < 2 mm to 20 mm. The third key factor is the half-acceptance angle, $\theta_a$, which is related to the total length, $h$, or the entrance aperture radius, $a$, for a known exit aperture radius, $a_0$, as given by Equations (6) and (7), respectively. The relationship between $h$ and $\theta_a$ is plotted in Figure 15 for different values of $a_0$ by making use of Equations (5) and (6). From this figure, the following observations can be made: (1) the acceptance angle reduces rapidly with the increase of the CPC total length, (2) a CPC with a given length has a larger acceptance angle for a larger exit aperture radius, and (3) a CPC with a given acceptance angle is longer for a larger exit aperture radius. From a constructability point of view, a CPC total length of 10-80 mm is acceptable for typical TCP thicknesses. The half-acceptance angle is between 10° and 20° for $a_0 = 2$ mm, taken as the OF diameter; refer to Figure 15. Clearly, a larger diameter OF is preferable, since the half-acceptance angle can then easily be increased to 40°, Figure 15. It is to be noted that bundles of OFs can be used to accept the light rays concentrated on the exit aperture of the CPC. However, several issues have to be resolved: (1) given the dimensions of the panel, the optimal shape of the CPC should be determined to define the exit aperture for the placement and number of OFs, (2) the cross-section of an OF bundle is not closed, and light will leak out from the spaces between the OFs and the boundary of the exit aperture, and (3) a method of binding the OFs together needs to be defined.
The factor that evaluates the light concentration ability is the maximum concentration ratio, defined in Equation (9) and plotted in Figure 16 for a 3D CPC with $n = n' = 1.0$, i.e., a hollow CPC placed in air. Figure 16 shows that a larger acceptance angle rapidly reduces the concentration ratio. For practical design of a CPC, a concentration ratio in the range 33.2 to 2.4 corresponds to half-acceptance angles in the respective range of 10°-40°. In the future, the CPC design will be optimized for use in actual TCPs. A series of CPC designs are simulated and shown in Figure 17 for different geometries (a variety of half-acceptance angles) and for two different maximum diameters (entrance apertures), namely 38 mm (1.5 in.) and 25 mm (1.0 in.) (Commissioner of Building Control, 2004).
Simulation of CPC and OF
In order to verify the sunlight concentration and transmission performance of the CPC with an OF, an optimized model was developed with the TracePro software, as shown in Figure 18. The diameter of the OF (2 mm) determines the diameter of the exit aperture of the CPC. In order to achieve a high concentration ratio with a reasonable half-acceptance angle (12.0°), the total length of the CPC is 27.3 mm (Figure 15) for a 2 mm exit aperture diameter, matching the diameter of the OF used in the TCP.
Figure 15. CPC total length variation with the half-acceptance angle for different exit aperture diameters.
It should be noted that a half-acceptance angle of 12.0° is small for practical applications of the CPC, but it is used here for illustration purposes and to achieve a high value of $C_{max}$. From Figure 16, the corresponding maximum concentration ratio of the considered 3D CPC in air is approximately 23.1. The optical properties of the OF are taken to be the same as those in Table 1. A grid source is used in the simulation, as shown in Figure 18, indicating the inclined light rays with incidence angles equal to the half-acceptance angle of the CPC. Different numbers of rays were used to make sure the simulation results are sufficiently accurate. From the simulation results, the luminance distributions over the OF end cross-section (right-hand end of the OF shown in Figure 18) are shown in Figure 19. From this figure, it is clear that including the CPC leads to much higher efficiency in light transmission. From the results in Figure 19, the following observations can be made: (1) the ray distribution area for the CPC-OF is obviously larger than that for the OF only, and (2) the maximum Lux value of 7494 for the CPC-OF (Figure 19(b)) is 6.4 times that (1167 Lux) for the OF only (Figure 19(a)).
Concluding remarks and future extensions
A TCP is constructed using reinforced concrete (RC) and OFs. Daylight transmission is a distinctive property of the panel, making it usable for energy-efficient building envelopes (Commissioner of Building Control, 2004). Such envelopes allow light to be transmitted into the rooms while having the potential for a better envelope thermal transfer value (Chen et al., 2012). In order to study the light transmission performance of the TCP, the optical mechanism of the OF is analyzed theoretically, and a numerical model is developed to determine its light transmission properties. Constructability of the TCP is addressed considering mechanical and optical requirements, and the critical bending radius is established. Small-scale "buildings" made up of wood boxes are constructed to simulate real buildings with TCP façades. The experimental and numerical results are discussed and compared. It should be mentioned that the smoothness of the OF ends is important for their light transmission function. The light capturing performance of the OF is limited because of its small cross-section, its small range of light acceptance angles, and the limited number of OFs in the TCP. Therefore, a CPC is introduced and integrated with the OF. Numerical analysis results demonstrated that this integration is beneficial in improving the light concentration performance of the TCP.
The preliminary TCP developed in this study gives an indication that energy efficiency of a building envelope is a feasible endeavor. A CPC integrated with OFs can improve the light capturing ability, while the planar distribution of the CPCs with OFs needs to be developed and optimized considering construction and economic constraints. The shape and dimensions of the CPC, either hollow or solid, are critical for the light concentration property of the TCP. Moreover, the design of the CPC can have implications for the load-bearing capacity of the TCP. Therefore, a balance between the optical and structural performances needs to be determined. Available commercial OFs are expensive; therefore, a low-cost light-transmitting pipe should be found to reduce the cost of the TCP. Finally, light irradiance and thermal insulation experiments with a standard light source or under sunlight should be conducted (Kahsay et al., 2019; Littlefair, 1998).
"Engineering",
"Environmental Science",
"Materials Science"
] |
Under the Surface of Images: Low-tech images produced by handheld devices for performance practice
‘Under the surface of images’ is a phrase used by Michel Foucault in the book Discipline and Punish when referring to the means by which surveillance is implicated in the discipline, regulation, and normalisation of the body. Here it is used to understand what can be unveiled by low-tech images in two different situations: the first concerns Jazmine Headley’s case, which happened in the United States in 2018; the second relates to my practice as an artist and the use of handheld devices in my live performance This is a Low-Tech Movie. In the paper I propose to discuss the use of technological devices as tools for visibility and emotional memory imprint – as described by the neuroscientist Boris Cyrulnik – and how this knowledge can be used in performance practice. The paper is divided into two main sections: one related to technology and ways of perception, with articles from Jonathan Crary, Camille Baker, and Boris Cyrulnik, and one on the discussion of the concept of the 'low-tech image' or 'poor image' produced by handheld devices and developed in my own performance practice.
INTRODUCTION
'Under the surface of images' is a phrase used by Michel Foucault in the book Discipline and Punish when referring to the means by which surveillance is implicated in the discipline, regulation, and normalisation of the body: "Under the surface of images, one invests bodies in depth" (Crary 1989). Despite the pertinence of Foucault's theories to discussions of images produced by technology and their impact on the individual, I'll leave Foucault for now to focus on an article by art historian Jonathan Crary, 'Spectacle, Attention, Counter-Memory' (1989).
In the article, Crary questions whether the origin of the Spectacle as developed by Guy Debord in The Society of the Spectacle is linked with the bourgeois political revolution in the nineteenth century, where, "for the first time, observable proof became necessary to demonstrate that happiness had in fact been obtained" (Crary 1989).
The nineteenth century saw major changes in social representation, in commodity production, and in the structures of power relations, where "the spectacle then would coincide with the moment when sign-value takes precedence over use-value" (Crary 1989), or the moment that vision gains privilege over function. This new visibility is a consequence of shifts in the place of the observer, Crary claims. The spectacle is thus driven by a visual apparatus that goes beyond the construction of an invisible matrix of control; it also extends to the body and to modes of perception.
In this paper, it is my intention to understand how the Spectacle, as a visual apparatus, affects body perception, memory, and emotion. Moreover, I am interested in how this research can be used in performance art practice.
The article is divided into two main sections: in the first section I discuss ways of perception produced by a visual apparatus, drawing on texts by Jonathan Crary, Boris Cyrulnik and Camille Baker; in the second section, I present the concept of the 'low-tech image' produced by everyday handheld devices, developed in my performance art practice.
TO APPEAR
I remember feeling outraged and then having a heavy feeling that there was nothing I could do. The video appeared in my Facebook news feed, posted by the New York Times. The framing and the quality of the video drowned me for a few seconds in uncertainty about what I was seeing, but it was the combination of visual misperception with the reading of the video's caption that glued me to the screen: "A video posted to Facebook showed police officers forcefully removing Jazmine Headley's 1-year-old son from her arms at a Brooklyn food stamp office. The episode ignited outrage online against the New York Police Department and the charges against Ms. Headley were later dropped" (NYT 2018). I then saw the action clearly (Figure 1).
The story is daunting. After waiting for several hours to be attended by a social welfare officer, Jazmine Headley, a single mother living in Brooklyn, and her 1-year-old son were seated on the floor of the welfare office. A security guard was not keen on this and ordered her to stand. She refused. The guard called the police, who ended up arresting Jazmine Headley and taking her baby from her. The story could have become just another case of abuse of power by authorities and of dehumanising bureaucratic systems, if not for the several witnesses who captured the arrest with their mobile phones and posted it on Facebook.
"It's the story of many other people. My story is the only one that made it to the surface" (Headley to the NYT 2018). Because it 'made it to the surface', Jazmine Headley is now the face of the struggle that the more deprived have endured within the American bureaucratic system. She is an example of the social asymmetries in her country. To see the video is to see that people like Jazmine exist. I would also argue that the video shows much more than Jazmine's story. It brings to the surface an absurd situation being taken beyond reason but, at the same time, shows the lack of empathy within a system in which we live in and how alienated workers can be from their own humanity.
It is curious, however, that this case can also be seen as a gap in the system, i.e., it opens a window onto the system's several layers. The exposure of the video gives rise to certain images that were under its surface, thus making clear its deepest and ugliest layer, where horror is kept hidden. "For Ms. Headley, the outpouring of sympathy that followed publicity of the arrest has brought her some relief. Her public assistance benefits have been restored" (NYT 2018). This outcome is not just a direct consequence of the action being captured on video but of its circulation on social networks: "the Facebook videos have been seen more than 1.3 million times" (NYT 2018). What allured viewers to the video, I advocate, was that the action was presented live and perceived as a real event.
I saw the video one month after the conflict had happened. I knew its narrative - it was written in the post, alongside the video - and I strongly engaged with it. After seeing it, I could not forget it.
Imprinting emotion
In the article 'Emotion and Trauma' (2013), neuroscientist Boris Cyrulnik describes how a traumatic event impacts memory not through the event itself but because of its psychological effect. It needs to arise in the "neurological circuits that leave a memory potential 'like a scar in brain tissue' (William James) or rather like a path carved out of neuronal undergrowth" (Cyrulnik 2013). To study the effects of trauma on memory, Cyrulnik proposes to differentiate emotion (caused by cerebral stimuli) from feeling (emotion triggered by a mental representation). Emotion works by connecting the orbitofrontal cortex to the amygdala and anterior cingulate cortex. The orbitofrontal cortex is responsible for modulating "the affective connotation of events" (Cyrulnik 2013). If the orbitofrontal cortex gets damaged, it can no longer control the amygdala and, as a consequence, the anterior cingulate cortex is overstimulated to the point that "the slightest event triggers uncontrollable emotion" (Cyrulnik 2013), thus making "the motor expression of emotion" uncontrollable as well.
When the overworked amygdala becomes hypertrophic, any feeling that arises from a conflict situation becomes the source of violent stress. It is also the cause of variations in the emotion imprinted in the memory: "Amygdala response is what determines whether an item of information is stored in memory. An alert amygdala ensures that some facts will become memory events. A numbed and lesioned amygdala lets nothing through to memory" (Cyrulnik 2013).
A balanced circuit that travels from the orbitofrontal cortex to the amygdala, regulating the anterior cingulate cortex, allows emotion to be imprinted in the brain; but an enlarged [hypertrophic] amygdala, due to disturbance occurring within the circuit, can provoke memory loss and even indifference. And the more deprived a person is of the sensorial development of emotions and sensations, the more the circuit fails to establish connections between the frontal cortex and the anterior cortex.
Boris Cyrulnik traces this sensorial development from birth, with new-borns learning to read parents' and caretakers' emotions through body contact. At the core of his research is the Attachment Figure, an 'external base' of security for babies when relating to the world: "A sensory figure featuring at a sensitive moment in a child's development becomes a key object that the child perceives over and above any other. From that point onwards a circuit is traced in its implicit memory, attaching the child to the familiar figure (...) Without such a figure the child panics (…) and is unable to process information correctly" (Cyrulnik 2013). The Attachment Figure gives shelter when the baby experiences fear or a traumatic event. If such a figure fails to exist, the emotion that would allow the baby to gain confidence (an imprint) fails to be retained in the memory, creating uncertainty, and new information can be overwhelming and experienced as traumatic. Memory no longer retains an imprint, and interaction with others can be perceived as uncertain, since the emotion is not stored in memory.
The result of an enlarged [hypertrophic] amygdala is the inability to enjoy life, indifference to pain, and the loss of the ability to feel empathy. Cyrulnik notes that even the "minor frustrations that are inevitable in daily life (e.g. a delayed feed or temporary absence of its mother for an infant, physical malaise or a relationship issue) cause minor levels of discomfort or distress that train us in empathy" (Cyrulnik 2013).
Going back to Jazmine Headley, I would propose that the videos broadcast live and disseminated on social media created an emotional response in me - they unveiled what I did not want to see, in a space (my Facebook feed) that is tailored to my interests. This emotional response occurs, I argue, because of aspects of the video's setting, such as how it was filmed, who filmed it, how it was broadcast and, more importantly, its effect of being authentic.
The spectacle of horror
"We do not need to experience horror in order to strengthen our memory of horrific images. People only have to show one series of horrific images (...) horror has a fascination that fixes memory" (Cyrulnik 2013). We acknowledge the fascination of horror by the Spectacle hence it lives in our daily routines; we only need to think about televised news to understand what this observation is about. What is interesting though is to acknowledge that horror by being recorded, reproduced and repeated will travel more efficiently to memory thus affecting perception and feelings.
Isn't this the effect promoted by the Spectacle? I return to Jonathan Crary's article 'Spectacle, Attention, Counter-Memory' and his questioning of whether late modernism could be at the origins of the Spectacle - a view promoted by T.J. Clark in his book The Painting of Modern Life. Regarding references to modernism as the antecedent of the Spectacle, Crary criticises Clark for positioning the spectacle as a "form of domination imposed onto a population or individual from without" and "an equivalent for consumer society" (Crary 1989), with an impact mainly on the social structures of society.
Crary argues that Clark disregards the possibility that Spectacle can have an impact on the reorganisation of the subject (body, attention, and memory) and ultimately on "the construction of an observer who was a precondition for the transformation of everyday life" (Crary 1989).
When researching Guy Debord's post-writings on The Society of the Spectacle, Crary introduced new data regarding the rise of the Spectacle: he claims that it had started in 1927. There were three events to consider that year, all of them related to the introduction of television for the masses: (1) Vladimir Zworykin's invention for television, (2) the incorporation of synchronised sound in cinema (with the movie The Jazz Singer), and (3) the rise of authoritarian regimes with the development of televised propaganda.
A feature of television is that it has always incorporated sound and image, and when sound was, for the first time, synchronised with moving images, it provoked an overlapping of stimuli in such a manner that a new kind of attention was demanded from the observer. This problematic of attention is referred to by Walter Benjamin, also in 1927, in the Arcades Project. For Benjamin, the main concern was perception being driven by the increasing use of technology. It provoked a crisis because "of a sweeping remaking of the observer by a calculated technology of the individual, derived from new knowledge of the body" (Crary 1989).
In fact, Crary presents Benjamin's 'standardised and denatured perception of the masses' in close relation with Henri Bergson's Matter and Memory. According to Crary, Bergson claims that attention "could become transformed into something productive only when it was linked to the deeper activity of memory" (ibid). However, if there is an impoverishment of memory (provoked by the overlapping of stimuli) and an inhibition of the representation of an object, then there is a possibility for 'hierarchies of power formations' to occur, that is, for images to invade the space of perception and be privileged over perception. It also implies a debilitation of the sensorial development of emotions, which can bring deficiencies in the function of emotional memory, as described earlier by Boris Cyrulnik.
By forcing such a formation of 'hierarchies of power' among body and senses, a sound-and-image apparatus (such as television) can rapidly become an instrument of social control, used first by authoritarian regimes and later when "corporate power sought home viewing, for maximization of profit" (Crary 1989).
Moreover, towards the end of the article, Crary stresses that "television is a further perfecting of panoptic technology" (Crary 1989), opposing Michel Foucault's thesis about society being one of Surveillance and not of Spectacle: "Surveillance and spectacle are not opposed but collapsed onto one another in a more effective disciplinary apparatus" (Crary 1989). Its effectiveness will improve over time with the addition of more sophisticated technology. What can happen is that this apparatus stops being detected, at a stage where it disappears and becomes invisible.
Television entered the intimacy of homes and, once in every household, it was removed from its condition of a device and gradually transformed into a companion, even holding a place within families.
Regarding horror, television perpetuated the idea of it, not just by showing it but also by creating an environment that intentionally affected the emotion of the observer (live coverage of an event, for example). Thus, there is no need to see a full image of horror; with the apparatus in motion, all it takes is a suggestion of horror to subsequently recognise it.
Nevertheless, television presents ways of production that are external to the observer. When handheld devices (I'm referring mainly to laptops and mobile phones) became part of everyday life, they brought a closeness to the body that appealed to intimacy and reflected how we see the world.
Camille Baker, in the book New Directions in Mobile Media and Performance (2019), describes how the mobile phone has "became beloved for its innate encouragement of spontaneity, and the speed of thought. It enables (...) mixing of ideas and emotion" (Baker 2019). How can everyday technology (handheld devices) play a role in imprinting emotion in the memory? And how can I incorporate such knowledge in my performance practice?
HANDHELD EMOTION
Camille Baker states that mobile phones offer their users the possibility to engage deeply with the world. The users are able to register, collect and share a personal vision of the world as they "encounter it -the mobile phone becomes the 'window of the world' (...) returning to observing the world in order to capture it, and then finding details that they might not have otherwise noticed" (Baker 2019).
Mobile phones bring immediacy and sharing to the way we communicate. Therefore, what is seen is captured by the lens of the mobile phone and converted into a moment of sharing with a community. While the mobile phone is shaped as a fragmentary system (my mobile phone is adjusted to my own needs and wants), it is connected to a digital community: there is a constant need to reach the other, whether by expecting a 'like', a comment on a post, or just an impact on the stats. But does the sharing of information bring an emotional bond with the other?
Camille Baker refers to the importance of the Attachment Figure in babies in strengthening the emotional connection with the other (discussed earlier in this article). Following this theory, she agrees that physical contact is important for our health but adds the significant role that online communication plays in human interaction: "Friends have reported to me that they can feel the sensation of transmitting or receiving warmth from each other, through emails, text messages, and Skype calls" (Baker 2019). Even though this type of connection is just a fraction of the experience or emotion provoked by a physical encounter, it can offer different options for long-distance connection with the people we care about.
Can this fraction of emotion be a stimulus that travels all the way to the memory? I would argue yes, while being aware that it is a fraction - it may get lost in context or narrative.
The fraction of emotion that is represented by a text message or a Skype call can have a strong impact on the memory. I can think of two or three friends who were disturbed by the lack of 'likes' on their comments on their social media accounts. There is no denying that the demands of communication nowadays are intrinsically dependent on these devices.
However, one can argue that smartphones don't give their users enough space to experience loneliness, or even that they narrow the spectrum of sensorial experience that allows emotional stimuli to travel to the memory. Moreover, to function, smartphones demand connection to a Wi-Fi network, and mobile phones push us into the virtual world and into the Spectacle. Still, I maintain that through their portability, their effect of immediacy, and their low-cost production, they can be a means to transform citizens into vigilantes of democracy, producing material such as Jazmine Headley's videos.
I have implied earlier in this article that the strong emotion provoked by Jazmine Headley's video was due not just to its content but also to the quality of the video and its fast dissemination on the Internet.
The quality of the image in the video is poor, grainy, and sometimes blurry, with exaggerated zooms. The framing is vertical (an indicator of mobile phone capture), and the camera moves constantly (Figure 1). It is far from being shot perfectly. Its true utility is to provoke feelings. We can refer to it as a video-witness that puts the action into the now - it screams, "this is happening now!" More importantly, I believe, it imprints an idea of the real, of an experience of a real-life event, and is therefore acknowledged as authentic.
Camille Baker describes this sense of a real-life momentum captured by the mobile phone lens: "The tension of the poor image quality (...) imparted a rawness to the video's medium and authenticity, a sense of 'realness' or 'liveness', making it seem more personal. This adds uniqueness and [an] ephemeral quality, distinguishing it from any other, and making it feel special to the observer who has co-experienced the moment" (Baker 2019). Videos like the ones of Jazmine Headley being arrested have in themselves a sense of authenticity and of urgency. The observer recognises them as being close to everyday life. As I see it, I do not question their authenticity.
This may happen because there are memories that were shaped by technology. I'm referring to the way we access intimate memories (family memories, for example) through technology - a previous experience of a specific type of image that brings forward the aesthetic of family photos, family videos, and amateur videos. It is with this experience already imprinted in the memory that the representation of the object activates our feelings, causing the medium to vanish, to become invisible, and the observer to be overwhelmed with emotion.
The aesthetic of the raw image captured by the mobile phone can be transformed into a powerful medium to evoke feelings. Within such an aesthetic, a moment that gets captured in a video or in a photo is not just a token of the moment being produced but also a link to the emotional memory of the moment. I have been working with this aesthetic in my own performance practice and I believe it can bring a valid contribution to performance studies.
A low-tech image for performance art
The transformations that digital media brought to the world are utterly mirrored in art. When I started experimenting with low-tech image in my performance practice, I came across several arguments about the impact that the everyday use of technology was having on the ontology of performance art. Steve Dixon, in the book Digital Performance (2007), describes such a tension between performance and technology.
Figure 2: Still from the performance 'This is a Low-Tech Movie' by Eunice Gonçalves Duarte.
Dixon is alert to the negative connotation that was given to the 'virtual' (virtual reality, virtual space and so on) claiming that it has been associated with the notion of 'fake' and 'illusion': "The artificiality or falsehood of digital images has, therefore, limited appeal to many live artists on aesthetic, ideological and political grounds" (Dixon 2007).
The use of digital technologies presents some problems for the ontology of performance, and the major criticism of digital performance concerns the issue of 'presence'. Performance is an art of the body, of the physicality of the actor's presence; in theatrical performance, moreover, actor and audience are united by a shared physical space.
Dixon points out that such disagreement begins with the concept of performance championed by Jerzy Grotowski's poor theatre. The poor theatre is intended to remove everything that is superfluous to the action, narrowing theatre performance down to the actor-spectator rapport: "For Grotowski's actors, the via negativa also involved a rigorous stripping away of bodily conditioning and psychological resistances in order to approach pure, animal bodily impulses, and good old-fashioned 'spiritual truth'" (Dixon 2007). To resolve the conflict, Dixon proposes the via positiva, where digital performance adds to what already exists. I would suggest another connection: the concept of the poor image promoted by Hito Steyerl in the manifesto 'In Defense of the Poor Image' (2009). Steyerl describes the poor image as being of bad quality: "It is a ghost of an image, a preview, a thumbnail, an errant idea, an itinerant image distributed for free, squeezed through slow digital connections, compressed, reproduced, ripped, remixed, as well as copied and pasted into other channels of distribution" (ibid.).
Steyerl presents the poor image in close connection between image and viewer. The poor image is presented as a proletarian element that disturbs the hierarchy of images by being in constant change (every token of the image is itself the original image). It can only exist within digital media. In short, it is an image that challenges the Spectacle, creating entropy within the forward flow of the capitalist system (by producing multiples from multiples).
Camille Baker shares a similar opinion in her book but channels this urgency into mapping performance artists working with mobile media and finding new experiences. Baker states that mobile phones are being used by artists "as a nonverbal visual communication tool (...) to express 'emotion' in the form of media files that represent an interpersonal felt connection over distance" (Baker 2019, p.17).
This is a Low-Tech Movie
Since 2010, I have been working on the concept of producing low-quality images. The low-tech image is a poor image that is produced in real time (or simulates real time). The image is captured by webcams and mobile phones, exploring the relationship with the physical space, the body, and objects. It plays with the limitations of the medium and of the light available on the set. It is not recorded prior to the performance, and it exists only for performance purposes. It is ephemeral. Once the performance ends, the video disappears.
Each image has a familiarity to it; in a way, it brings comfort to the audience because they recognise the images from their own experience, causing the sensation of having seen them somewhere before. It is the image of home videos that quickly capture the moments of everyday life. This type of image does not intend to compete with cinema, nor with television, but attempts to compose an image of affections and memories.
My most recent performance piece This is a Low-Tech Movie - a performance-installation that attempts to film a movie in real time - tells the story of three women who, for social and political reasons, left Portugal to find refuge in other European countries (Figure 2). This performance incorporates the concept of the low-tech image. To create the piece, I collected several stories of women who, in situations of turmoil, decided to leave Portugal to find refuge in another European country. From my story collection I created three new stories and connected them to my family stories.
During the live performance I tell the three stories using family photos, old postcards and elements of nature in micro settings displayed on tables (Figure 3). The audience is with me and sees the manipulation of light and webcams to capture the action in real time. There is no editing. Everything is performed live.
Even though the stories departed from real situations in people's lives, they are fiction. Nevertheless, I was surprised by the audience's response: they refused to accept the stories as fiction and imprinted authenticity onto the images produced in the performance.
My hypothesis for such a reception is that the medium may have taken over the senses and connected the images with the emotional memory; the brain may be identifying a visual narrative that relates to past experiences. "I can only put in memory what you contribute to it - your emotions, your anger, your smiles. In short, our connection. My autobiographical memory only consists of what you've put into it; it is a relational memory," Cyrulnik says in an interview with the International Review of the Red Cross (2009).
The emotional exchange that happens in This is a Low-Tech Movie is mediated through images, but this is already something that performance practice does. Nonetheless, Jazmine Headley's videos helped me to understand that by perceiving her videos as authentic - as showing a real person in a real-time situation - she became present to me. Therefore, to appear in such a manner is to be made present.
In my performance practice, I dwell in the gaps of the non-existing image. I consider my concept of the low-tech image inscribed in a militancy of bringing women their legitimate visibility. I produce an image that can compensate for the lack of visual material of women taking part in historical moments.
I believe my practice contributes towards the understanding of the relation between performance, technology, and society, strongly bound in an image that is an expression of emotion. With it, I expect to explore the technological aspects of the Spectacle and to use technology as a means of counter-spectacle, i.e., using the same devices that produce 'the visible' to show the invisible fringes of art and of society.
"Art",
"Computer Science"
] |
Supersymmetry and Wrapped Branes in Microstate Geometries
We consider the supergravity back-reaction of M2 branes wrapping the space-time cycles in 1/8-BPS microstate geometries. We show that such brane wrappings will generically break all the supersymmetries. In particular, all the supersymmetries will be broken if there are such wrapped branes but the net charge of the wrapped branes is zero. We show that if M2 branes wrap a single cycle, or if they wrap several co-linear cycles with the same orientation, then the solution will be 1/16-BPS, having two supersymmetries. We comment on how these results relate to using W-branes to understand the microstate structure of 1/8-BPS black holes.
Introduction
There are very large families of solitonic solutions to supergravity that appear, from far away, to be like black holes and yet cap off in smooth geometry as one approaches the core of the solution [1-3]. The vast majority of the known solitons are supersymmetric and, for those that have the charge and angular momenta corresponding to black holes with macroscopic horizons, the core of the solution asymptotes to an arbitrarily long, possibly rotating, AdS_p × S^q throat [4,5]. The geometry then caps off just above where the black-hole horizon would be. Classically, the depth of the throat, and hence the red-shift between the cap and infinity, is a freely choosable parameter determined by the moduli of the soliton. While the capping-off was long known to be the result of a geometric transition in which non-trivial homology cycles blow up, supported by magnetic flux, it was only relatively recently that it was shown that the non-trivial topology is an absolutely essential ingredient if one is to have smooth solitons in supergravity [6].
One of the primary motivations for studying such supergravity solitons is the microstate geometry program, in which the solitonic geometries are related to the microstate structure of black holes with the same asymptotic charges. For BPS black holes with AdS throats, this can be made very precise using holographic field theory and, in particular, IIB microstate geometries, and fluctuations around them, can be mapped directly onto states in the D1-D5 CFT. (For some recent results, see [7-11].) Deep, scaling microstate geometries can access states in the lowest-energy sectors of the D1-D5 CFT that contain much of the microstate structure. Indeed, once the classical moduli space of these geometries is quantized, the depth of the throat and the redshift are limited in precisely such a manner as to holographically reproduce the energy gap of the maximally twisted sector of the CFT [4,5,12,13].
A major focus of the microstate geometry program has been the study of the BPS fluctuation spectrum in supergravity and the determination of the dual CFT states. The goal of this work has been to see the extent to which such fluctuations can sample the microstate structure and perhaps give a semi-classical description of the black-hole thermodynamics. While there has been a lot of progress on this in recent years [7-11], it remains unclear whether such techniques will yet give a semi-classical description of the entropy. The challenge that remains is how supergravity can be used to access a rich variety of states within the highly twisted sectors of the CFT. There are quite a few ideas as to how this might be achieved but, at present, the required computations are extremely challenging.
Independent of the fluctuation story, there is the broader application of microstate geometries as backgrounds on which to study string theory and to see if one can use intrinsically stringy excitations of these geometries to access the microstate structure of black holes. One of the most interesting and promising approaches to this is the study of branes wrapping the non-trivial space-time homology cycles that support the microstate geometry.
Perhaps the first motivation for studying such wrapped branes is the black-hole 'deconstruction' story [14,15]. This starts with a fully-back-reacted geometry consisting of a D6-anti-D6 pair with fluxes that induce D4-D2-D0 charges. The scaling geometry is then AdS_3 × S^2. A gas of D0 branes is added to this background, and the bubble equations, or integrability conditions, localize the D0 branes at the equator of the S^2. The back-reacted D0-brane gas is, of course, an extremely singular family of solutions; however, it is then argued that the Myers effect [16] causes polarization of the D0 branes into D2 branes wrapping the S^2. If this configuration is lifted to M-theory, then the D0-charge becomes momentum and the underlying configuration can be mapped onto the MSW string [17] and can thus carry the entropy of a BPS black hole.
The Myers effect can only produce dielectric D2 branes, and so it is something of a 'stretch' to get to non-trivial wrapped branes on the S^2. There are also issues of tadpoles and supersymmetry breaking that we will discuss in more detail below.
Another approach that points to the importance of branes wrapping cycles of microstate geometries comes from quiver quantum mechanics [18-21]. The underlying supergravity background is once again a scaling geometry constructed from fluxed D6 branes. In terms of the quantum mechanics on the brane, it is shown in [12, 21-24] that the exponentially growing number of states of a black hole can arise only on the Higgs branch of the quiver. The states on the Higgs branch are described by open strings stretched between the D6 branes, and in M-theory this corresponds to M2 branes wrapping the non-trivial cycles of the corresponding scaling geometry.
Much more recent work [25] by Martinec and Niehoff provides a new framework that gives new insights into deconstruction and the ground-state structure of quiver quantum mechanics. One of the key observations in [25] is that if branes wrap cycles in scaling microstate geometries, then they become very light, giving rise to massless particles or to tensionless strings or branes in the extreme scaling limit. It is also suggested that such "W-branes" could condense and give rise to new phases with an exponentially growing number of BPS states. In this way one can imagine the W-brane degrees of freedom giving rise to the Higgs branch of quiver quantum mechanics.
The apparent problem with this idea is that, naively, there appear to be rather few ways that branes can wrap cycles in a microstate geometry. However, as in the deconstruction story, Martinec and Niehoff point out that, within the compactified directions, the wrapped branes behave as point particles in the magnetic fluxes that thread the compact directions, and so there is a vast number of distinct W-branes coming from the degeneracy of the lowest Landau level. They then argue that the W-brane degeneracy is given by counting walks on the quiver and show how this reproduces the counting formulae derived in [21-24] and hence the exponentially growing number of states. This is an extremely appealing picture in that it not only meshes well with the ideas and results coming from deconstruction and from quiver quantum mechanics but also provides a semi-classical description of solitonic states, within a microstate geometry, whose massless limit may well account for the black-hole entropy. The picture is also beautifully reminiscent of how the E_8 × E_8 W-bosons emerge from the geometry of K3 in heterotic-type II duality [26]. The W-brane picture exemplifies how microstate geometries can provide a background for describing intrinsically stringy states that are part of the microstate structure of black holes.
Whatever the perspective, there are compelling reasons to study branes wrapping cycles in microstate geometries and, in particular, M2 branes wrapping 2-cycles in the original five-dimensional microstate geometries [1-3] in M-theory.
Up until recently, W-branes have been treated as probes in the background of microstate geometries. However, if there are sufficiently many wrapped branes then there will be a significant back-reaction, and this will require treatment within supergravity. It is therefore possible that supergravity may, in fact, see some large-scale, coherent aspects of W-branes. Perhaps the simplest approach to investigating such wrapped branes in supergravity is to start with the T^6 compactification of M-theory and consider an M2 brane wrapping an S^2 in a five-dimensional microstate geometry. Such a brane is point-like in the T^6, but one can simplify the problem by smearing over the entire torus and reducing the problem to five-dimensional supergravity. The smearing also avoids the problem of how to handle the electric field lines on the compactification manifold, since it forces all the electric field lines into the space-time. In five dimensions, the four-form field strength sourced by such a wrapped brane is dual to a scalar field, and the relevant five-dimensional field theory is N = 2 supergravity coupled to both vector multiplets and hypermultiplets. Thus far, the study of microstate geometries from the five-dimensional perspective has largely focussed on N = 2 supergravity coupled only to vector multiplets; the addition of hypermultiplets adds a whole new level of complexity but is required if one is to study wrapped-brane states in five dimensions. This has therefore been the starting point for studying the supergravity back-reaction of wrapped branes [27-29].
There is a potential problem with smearing the brane source over the compactification manifold: such a source has spatial co-dimension equal to 2 and so, in flat space, would involve logarithmic Green functions. In asymptotically-flat backgrounds this will generically lead to singularities at infinity. One can avoid this issue by taking the background to be AdS3 × S2. Indeed, if one takes a bubbled microstate geometry and removes the constants in the harmonic functions that lead to asymptotic flatness at infinity, one typically gets a solution that is asymptotically AdS × S. This means that AdS3 × S2 is an excellent local model of a single bubble within a scaling microstate geometry. If the total W-brane charge is non-zero, then restoring the asymptotically flat regions will still lead to problems. In an asymptotically flat solution one can interpret the configuration in terms of branes: specifically, in the IIA formulation, the wrapped M2 W-branes become fundamental strings ending on the D6 brane that wraps the T6. This leaves an uncanceled tadpole on the compact D6 world-volume, and this issue is presumably related to the divergences at infinity in the supergravity solution. The easiest way to handle these problems in flat space is to consider a solution with multiple bubbles and try to wrap branes in such a manner that there is no net W-brane charge, leading to a dipolar charge distribution whose fields fall off faster at infinity in supergravity and which has no tadpoles on other wrapped branes. We will see that this is completely incompatible with supersymmetry.
Wrapped M2 branes in AdS3 × S2 were studied in great detail in [27][28][29], where some very interesting new families of BPS solutions were found. These new families were shown to preserve four supersymmetries, but it was not clear how those supersymmetries would be modified in an asymptotically flat background, nor how they might depend upon the orientation of one bubble relative to another. It is the purpose of this paper to resolve precisely these issues.
First and foremost, if a solution is asymptotic to AdS × S or, equivalently, the dual field theory is superconformal, then the supersymmetries come in two classes: Poincaré and superconformal. Breaking the conformal invariance can only preserve the Poincaré supersymmetries. For a black hole in flat space, the superconformal symmetry present in the near-horizon limit is broken by the flat asymptotics. In the standard lexicon, when we say that a 1/8-BPS black hole has four supersymmetries, this means four Poincaré supersymmetries. Moreover, any microstate of such a black hole, and thus any corresponding microstate geometry, must also preserve the same four Poincaré supersymmetries. Whether the solutions of [27][28][29] lie in the ensemble of 1/8-BPS black-hole microstates therefore rests on understanding how many of the four supersymmetries identified in [27][28][29] are Poincaré or superconformal supersymmetries, and whether these solutions, in fact, break the Poincaré supersymmetries of the 1/8-BPS black hole. By analyzing the supersymmetries when these solutions are coupled to flat space, we will show that two of the four supersymmetries are broken by that coupling, which means that there are only two Poincaré supersymmetries and thus the solutions of [27][28][29] are 1/16-BPS states. To understand the supersymmetry in more detail, one can study the supersymmetry projectors that arise from the corresponding brane configurations. A priori there are two possible ways in which the supersymmetries may work out. The first, and most obvious, comes from the standard application of brane projectors: a stack of M2 branes lying in the directions 0, 1, 2 imposes the following constraint on the supersymmetries (see, for example, [30,31]):
$$\Gamma_{012}\,\epsilon \;=\; \epsilon \qquad (1.1)$$
and thus, typically, cuts the amount of supersymmetry in half compared to the amount of supersymmetry without the M2 branes. In particular, applying one such projector to a standard 1/8-BPS microstate geometry would result in a 1/16-BPS background. If there were multiple wrapped branes on cycles with different orientations in the four-dimensional spatial base, one would have to impose at least one other projector of the form Γ_{0xy}, and since this does not commute with (1.1), all supersymmetry would be broken.
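To see concretely why two such projectors are incompatible, here is a short check using only the Clifford algebra in the mostly-plus signature used here, with (Γ0)² = −1 and (Γi)² = +1; this is standard gamma-matrix algebra rather than a result quoted from [30,31]:

```latex
% Two M2-brane projectors sharing one spatial direction anticommute:
\Gamma_{012}\,\Gamma_{013}
  = \Gamma_0\Gamma_1\Gamma_0\Gamma_1\,\Gamma_2\Gamma_3
  = \Gamma_{23}\,,
\qquad
\Gamma_{013}\,\Gamma_{012} = -\,\Gamma_{23}
\quad\Longrightarrow\quad
\{\Gamma_{012},\,\Gamma_{013}\} = 0\,.
% Hence no common eigenspinor with eigenvalue +1:
\Gamma_{012}\,\epsilon = \epsilon\,,\;\;
\Gamma_{013}\,\epsilon = \epsilon
\;\Longrightarrow\;
\epsilon = \Gamma_{012}\Gamma_{013}\,\epsilon
         = -\,\Gamma_{013}\Gamma_{012}\,\epsilon
         = -\,\epsilon
\;\Longrightarrow\; \epsilon = 0\,.
```

This is precisely the incompatibility invoked above for Γ_{0xy} projectors with different orientations in the spatial base.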
However, the projector (1.1) is based upon rather simple brane configurations, and it is possible that it is modified in a suitably complicated background. For example, the dipolar distributions of M5 charge that underlie black rings and microstate geometries neither modify nor place further conditions on the supersymmetries, apart from the projections required by the original electric M2-brane charges. This works through a remarkable conspiracy between the M5-charge density and the angular-momentum density, so that their combined supersymmetry projections reduce to those of the underlying M2-brane charges [32,33]. Thus microstate geometries exhibit the kind of geometric transitions that allow densities of new brane charges in precisely such a way that the original supersymmetries remain unbroken. However, such remarkable conspiracies can usually be detected by brane probes. Brane wrapping of black-hole geometries was also analyzed in detail in [34], where it was shown, using brane probes, that branes that wrap black holes in asymptotically flat geometries generically break all the supersymmetry.
Here we will show that wrapping M2 branes on a space-time 2-cycle does indeed reduce the supersymmetry in exactly the manner that the naive brane-projector analysis suggests. In particular, we will show that two of the four supersymmetries found in [29] are artefacts of the superconformal symmetry and will be lost as soon as the configuration is embedded in an asymptotically-flat space-time. Indeed, 1/8-BPS microstate geometries that are asymptotic to flat space necessarily impose the supersymmetry projection condition
$$\big(1 - \Gamma_{1234}\big)\,\epsilon \;=\; 0\,, \qquad (1.2)$$
where 1, 2, 3, 4 are the non-compact spatial directions (see, for example, [4]) and the Γ's are eleven-dimensional gamma-matrices. This condition arises from the fact that the configuration must carry three M2-brane charges. Put differently, to represent a microstate of a black hole, the microstate geometry must have the same supersymmetry as the black hole. For five-dimensional black holes, where the supersymmetries are symplectic Majorana, the projection condition (1.2) may be written in terms of γ0, where γ0 is a five-dimensional gamma-matrix; using the standard identity for the product of gamma matrices, that form may be re-written as (1.2). We show precisely how imposing this projection condition on the supersymmetries of [29] cuts their number in half, which means that, once embedded in flat space, these solutions are actually 1/16-BPS microstates. Going further, we relate the computations in [29] to their flat-space analogs, and show how the projection conditions imposed in [29] are precisely of the form (1.1).
Thus we conclude that, for a microstate geometry in flat space (where conformal invariance is broken), wrapping branes around a single cycle will reduce the usual 1/8-BPS, asymptotically flat, microstate geometries, with four (Poincaré) supersymmetries, to 1/16-BPS microstate geometries, with just two (Poincaré) supersymmetries. Furthermore, the supersymmetry projection that plays an integral role in [29] actually depends upon the orientation of the wrapped cycle, and if cycles have different orientations then the corresponding supersymmetry projectors will be incompatible. This means that wrapping more than one cycle with generic orientations will, in fact, break all the supersymmetries. We will also show that there is no way to solve the tadpole problem without breaking all the supersymmetries, and so we recover the result expected from the brane-probe analysis [34].
In Section 2 we give some of the relevant details of N = 2 supergravity in five dimensions coupled to vector multiplets and hypermultiplets. In Section 3 we set the hypermultiplets to zero and review (very briefly) the standard formulation of bubbled microstate geometries using a Gibbons-Hawking (GH) base for the spatial sections of the manifold, and then discuss how AdS3 × S2 emerges as a local model of an isolated bubble. We then discuss the eight supersymmetries of AdS3 × S2 in terms of the global metric and in the Bergman form, and show how these supersymmetries are reduced to four if the superconformal symmetry is broken by adding more bubbles or simply by making the single-bubbled solution asymptotically flat. In Section 4 we restore the hypermultiplets and consider the solutions of [29]. We discuss the structure of the supersymmetry and how it is further reduced by the presence of wrapped branes, and we translate this back into the description of bubbled geometries using GH base geometries. This enables us to show how the supersymmetry will generically be completely broken by wrapping branes on multiple cycles. We argue that the only supersymmetric microstate geometries with wrapped branes are 1/16-BPS, and these involve branes wrapped with the same orientation around co-linear cycles. Moreover, we argue that any solution with branes wrapped in the space-time and with no net charge for these branes necessarily breaks all the supersymmetry. Finally, in Section 5 we discuss the meaning of our result for the study of W-branes and black-hole microstates. In particular, we argue that while the wrapped-brane solutions are not, in themselves, 1/8-BPS microstates of black holes or black rings, W-branes do provide a way to access, and might even enable us to count, the BPS microstate structure of 1/8-BPS black holes and black rings in deep scaling microstate geometries.
2 The Lagrangian and BPS equations
We work within five-dimensional, N = 2 supergravity coupled to both vector and hypermultiplets. The bosonic action may be taken to be of the standard form (2.1). Our goal is to write the action in a manner that is a simple extension of the N = 2 supergravity coupled to vector multiplets that is typically used in the discussion of microstate geometries. Our space-time metric is "mostly plus" and we will have only two vector multiplets and hence three vector fields. Thus I, J = 1, 2, 3, and we normalize the A^I so that C_{123} = 1. The scalars satisfy the constraint X^1 X^2 X^3 = 1, and the metric for the kinetic terms is
$$Q_{IJ} \;=\; \tfrac{1}{2}\,\mathrm{diag}\big((X^1)^{-2},\,(X^2)^{-2},\,(X^3)^{-2}\big)\,.$$
As usual, it is convenient to introduce three scalar fields, Z_I, and take
$$X^I \;=\; \frac{(Z_1 Z_2 Z_3)^{1/3}}{Z_I}\,.$$
The scalars, q^u, are, of course, those of the hypermultiplets. One can relate our conventions most simply to those of [35] by defining hatted quantities: the hatted quantities are then those of [35], and we have set κ = 1/√2. The conventions of [36] are very similar, except that they use a "mostly minus" metric, and thus one must send g_{μν} → −g_{μν} and modify the gamma matrices appropriately.
The BPS equations come from setting all the supersymmetry variations of the fermions to zero. The symplectic indices are raised and lowered with the antisymmetric ε-symbol, and our gamma matrices satisfy the Clifford algebra for the "mostly plus" metric, {γ^a, γ^b} = 2η^{ab}.

3 The standard bubbled geometries
Bubbled geometries on a Gibbons-Hawking base
We first set all the hypermultiplet scalars to zero and recall the story for bubbled geometries on a GH base manifold, B. The metric takes the form: where the spatial sections on B are the usual, possibly ambi-polar, GH metric with Later it will be useful to adopt axial polars on the R 3 sections of B, in which one has: As usual we take: where r i ≡ | y − y i |. The BPS Ansatz for the Maxwell fields may be decomposed into electric and magnetic components: where B (I) is a one-form on B. The magnetic parts of the field strengths are defined by: The magnetic vector potentials are given by: while the electrostatic potentials are where The remaining parts of the metric are given by: Regularity at each GH then requires that: The parameters ε 0 , ℓ 0 and m 0 are determined by the asymptotics at infinity. Finally, there are the bubble equations that must be satisfied so as to avoid closed timelike curves at each of the GH points. For more details, see [3].
Two centers and AdS3 × S2
If one can separate two of the GH centers from the rest, and if they are close enough together that one can ignore the constants, ε0 and ℓ0, then the resulting space-time may be reduced to AdS3 × S2 [14,37,38]. The GH potential is simply:
$$V \;=\; \frac{q_+}{r_+} \;-\; \frac{q_-}{r_-}\,, \qquad (3.14)$$
where q± ≥ 0 and $r_\pm \equiv \sqrt{\rho^2 + (z \mp a)^2}$. Gauge transformations allow us to shift K^I → K^I + c^I V, which means we can shift the poles in the K^I without loss of generality. By uplifting to six dimensions one can also shift V by one of the K^I's, and such a spectral flow can be used to bring the K^I to a simple canonical form (3.16). For simplicity, we will take this canonical form, with the forms of the L_I and M determined by regularity (3.20). After shifting and rescaling variables, the metric (3.1) then takes the simple form of AdS3 × S2, with the AdS3 written over the Bergman base (3.27). The Maxwell field also dramatically simplifies: (3.5) reduces to a field strength, F, proportional to the volume form on the S2 (3.26), as one would expect.
Killing spinors
We continue with all the hypermultiplet scalars set to zero. Since our background obeys the "floating brane Ansatz" [39], the BPS equation (2.6) is trivially satisfied as a result of a cancellation between the connection terms and the Maxwell field strengths. This leaves the equation that determines how all the supersymmetries depend upon the coordinates. Indeed, using the ẽ_a frames (3.31) with (2.9) to write products of three gamma matrices in terms of products of two gamma matrices, we find a system of first-order differential equations for the supersymmetries (3.37). One can trivially solve for the dependence on ξ and θ, and the rest can be solved directly by taking derivatives and commuting gamma matrices through the first part of the solution. We find a solution (3.38) in the form of a product of exponentials of gamma matrices acting on a constant spinor, ε0^j. Note that there are eight solutions: four components and two choices for j. These solutions contain both the Poincaré and superconformal supersymmetries.
If one uses the ê_a frames (3.32), then the local Lorentz rotation (3.33) undoes the ξ-dependence and gives ε^j as a product of exponentials, beginning with e^{−(i/2)θγ4}, acting on a constant spinor (3.39). Observe that if we take the γ_a in this expression to be the gamma matrices in the ẽ_a frames in (3.31), then (3.34) implies that Γ0 represents the γ0 matrix of the GH frames, (3.30). The natural Poincaré projection condition in a generic GH space is given by taking Γ0 ε^j = ±iε^j for one choice of sign. Acting with Γ0 on the spinor in (3.38) shows that the solution space respects this projection for one choice of sign (3.42), but does not respect the projection with the opposite sign. The projection condition (3.42) therefore identifies the Poincaré supersymmetries associated with the general GH space. Note that this is normally recast using (2.9) so as to emphasize the hyper-Kähler property of the base (3.43). Alternatively, based on (3.33) and (3.34), one can take the γ_a to be those of the Bergman frames, (3.32), and define a "gamma matrix" that represents the γ0 matrix of the GH frames, (3.30), in the Bergman frames; acting on (3.39), it leads to the same result as in (3.42) and (3.43).

4 The supersymmetries with hypermultiplet scalars
The hypermultiplet solutions
The background considered in [29] is the half-hypermultiplet parametrized by a complex scalar, τ. The simplest way to satisfy the BPS equations for this is to take τ = τ(z) to be a holomorphic function of the coordinate, z = tanh(ζ/2) e^{iφ}, on the Bergman base in (3.27). Indeed, the simplest non-trivial solution has a logarithmic profile, τ(z) = V∞ + q* log(z) (4.1) (up to normalization of q*), where q* and V∞ are constants. This locates the wrapped M2 branes at z = 0, with a source proportional to q*. In [29] the solution is written in terms of coordinates (x, ψ̂) defined through z (4.2). The new class of solutions obtained in [29] has a metric with frames (4.3) determined by a function Φ(x) that satisfies a non-linear, ordinary differential equation. Observe that we have made some changes of notation and convention compared to [29]. First, we have relabelled t and ψ in [29] by t̂ and ψ̂. This makes the notation in (4.3) consistent with our notation here and avoids confusion between Im(log(z)) in (4.2) and the GH fiber coordinate, ψ. These coordinates are, of course, related by (3.29). We have also reversed the sign of t̂ relative to t in [29] and flipped the orientation of the frame E^φ̂. This brings (4.3) into line with the orientations of (3.30)-(3.32). It should be remembered that [29] uses conventions that make the two-forms in the BPS equations of bubbled geometries have the opposite dualities to the standard ones on the GH base. Our modifications restore the canonical forms of these duality conditions.
If one sets q* = 0, and thus removes the wrapped M2 branes, then Φ(x) reduces to a simple closed form [29], and one finds that the frames (4.3) become precisely the Bergman frames in (3.32) with l = 4k.
The supersymmetries
In [29] it was shown that a non-trivial half-hypermultiplet background imposes one additional projection condition on the supersymmetries of the AdS3 × S2 background without the wrapped M2 branes. This condition, (4.6), is a projector built from σ3 and the combination iγ_{x̂ψ̂}, where σ3 is the usual 2 × 2 Pauli spin matrix acting on the N = 2 indices of the spinor and γ_{x̂ψ̂} refers to the product of gamma matrices in the frames (4.3).
In the GH frames, the projector in (4.6) involves the combination iγ_{x̂ψ̂} (4.9). However, because of the self-duality of the GH base and the projection condition (3.43) in the GH frames, we have
$$\gamma_{ab}\,\epsilon^j_{\rm GH} \;=\; -\tfrac{1}{2}\,\epsilon_{abcd}\,\gamma^{cd}\,\epsilon^j_{\rm GH}\,, \qquad (4.10)$$
and so (4.9) becomes
$$i\,\gamma_{\hat x\hat\psi} \;\to\; i\,\gamma_{23} \;=\; -\,\gamma_{014}\,, \qquad (4.11)$$
where we have used (2.9) in the last identity. Now recall that the homology cycles in a GH metric are defined by the ψ-circle fibered along any curve between poles of V; moreover, the minimum-area cycle involves the shortest such curve. Thus, in the GH form of the metric with (3.16), the two-cycle is defined by the ψ-circle over the interval along the z-axis between −a and a. From (3.30), the area form of this cycle is e1 ∧ e4. Thus (4.11) corresponds to the projector for the M2 brane wrapping this cycle.
In a general bubbled solution, each wrapped M2 brane will give rise to a supersymmetry projector that depends on the orientation of the brane. More precisely, the supersymmetry projector will depend upon the orientation of the straight line joining the two GH points in the base R3 parametrized by y in (3.1). The γ4 in (4.11) will then be replaced by a linear combination of the γ_a, a = 2, 3, 4. Any two such projectors are compatible (have a common null space) if and only if all the GH points are co-linear and the wrapped branes have the same orientation. Indeed, co-linear wrapped branes with opposite orientations source the Maxwell field with opposite signs and so lead to opposite signs in (4.6). A pair of such opposed projectors manifestly has no common null space.
This has several important consequences for the supersymmetry. First, all the supersymmetry will be broken if the branes wrap cycles that are not co-linear. If the wrapped cycles are all co-linear, then supersymmetry will still be broken if the branes wrap in different orientations, as determined by the relative signs of the Maxwell fields they source. This means that solutions with wrapped M2 branes but no net wrapped M2-brane charge necessarily break all the supersymmetries (see the check following this paragraph). Finally, if all the wrapped branes lie on co-linear cycles and have the same orientation, then the projectors of these branes are all the same, and their combined effect is to reduce the supersymmetry by another factor of a half.
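For co-linear cycles wrapped with opposite orientations, the incompatibility is elementary; the following check uses only (Γ012)² = 1 in the mostly-plus signature:

```latex
(\Gamma_{012})^2 = 1
\quad\Longrightarrow\quad
P_{\pm} \equiv \tfrac{1}{2}\,\big(1 \pm \Gamma_{012}\big)
\;\;\text{are orthogonal projectors:}\;\;
P_{+}\,P_{-} = \tfrac{1}{4}\,\big(1 - (\Gamma_{012})^2\big) = 0\,.
% A spinor obeying both sign choices is annihilated:
\Gamma_{012}\,\epsilon = +\,\epsilon
\;\;\text{and}\;\;
\Gamma_{012}\,\epsilon = -\,\epsilon
\quad\Longrightarrow\quad
\epsilon = P_{+}P_{-}\,\epsilon = 0\,.
```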
As regards the total number of supersymmetries, the AdS3 × S2 background starts with eight real supersymmetries once the symplectic Majorana condition is imposed on (3.38) or (3.39). If one simply wraps the S2, one preserves the conformal invariance, and hence the superconformal supersymmetries, but one must impose the projector (4.6); as shown in [29], this leaves four supersymmetries. If one breaks the conformal invariance, either by restoring the asymptotically flat region or by adding more bubbles, then one must impose another supersymmetry projector, (3.42) or (3.43), which is compatible with the projector (4.6). This reduces the solution to two supersymmetries and renders it a 1/16-BPS background. Thus, if one takes a general 1/8-BPS bubbled geometry and wraps any single bubble with M2 branes, the result is a 1/16-BPS solution. If one wraps more than one bubble, then all the supersymmetry will be broken unless all the wrapped bubbles are co-linear and wrapped in the same orientation; only then will the solution be 1/16-BPS.
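The counting can be summarized in one line (recall that eleven-dimensional supergravity has 32 supercharges, so 1/8-BPS means four and 1/16-BPS means two preserved supersymmetries):

```latex
8_{\;\mathrm{AdS}_3 \times S^2}
\;\xrightarrow{\;\text{wrapped-brane projector (4.6)}\;}\; 4
\;\xrightarrow{\;\text{Poincar\'e projector (3.42)/(3.43)}\;}\; 2
\;=\; \tfrac{1}{16}\times 32\,.
```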
5 Conclusions
We have shown that wrapped branes do indeed break some, or all, of the supersymmetries in a microstate geometry, and that this is governed by the naive supersymmetry projectors associated with the wrapped branes. Thus branes wrapped on a single cycle, as in [27][28][29], should really be viewed as 1/16-BPS excitations of a microstate geometry, because that is the amount of supersymmetry that will survive once superconformal invariance is broken. This means that such wrapped branes should not be identified with microstates of 1/8-BPS black holes, but should be viewed as (partially) supersymmetry-breaking excitations of such geometries.
Our results also suggest that there might well be interesting 1/16-BPS generalizations of the results in [28,29], in which branes wrap, with the same orientation, multiple co-linear 2-cycles on a Kähler base. The starting point for such a set of solutions might be to generalize the GH base geometry to the Kähler bubbled geometries of LeBrun [40], and perhaps to find BPS bubbled solutions in which one adds hypermultiplets to the work of [41,42].
On the other hand, one cannot solve the tadpole problem supersymmetrically without decompactifying the compact directions. To remove the tadpoles, the M2 branes wrapped on space-time cycles must have no net charge. This can be achieved by wrapping branes around cycles in a closed quiver, but this would involve multiple, incompatible supersymmetry projectors. One might instead try to use co-linear cycles wrapped in opposite orientations, but this would lead to projectors with
$$\Gamma_{012}\,\epsilon \;=\; \pm\,\epsilon\,,$$
where the sign depends on the orientation, and so there would be no residual supersymmetry. Therefore, any wrapped-brane configuration that solves the tadpole problem without decompactification will necessarily break all the supersymmetries. The fact that wrapped branes do not preserve the supersymmetries of a given microstate geometry means that they should not be viewed as supersymmetric microstates. However, this does not mean that they cannot be used to describe the microstate structure. Indeed, we believe that W-branes can access the 1/8-BPS structure of black holes and that counting W-brane configurations will enable one to enumerate the ground-state degeneracy of 1/8-BPS black holes. First, Martinec and Niehoff [25] point out that the fact that W-branes become light in the scaling limit means that there will be a new phase of stringy physics emerging in the deep scaling regime of microstate geometries. They argue that the W-branes will probably form condensates, and that new operators will develop vevs and define order parameters in that new phase. Quiver quantum mechanics confirms this picture rather nicely: when the D6 branes are widely separated, the W-branes are massive Higgs excitations of the system. When the branes coincide, the Higgs fields become massless and the Higgs branch opens up. The ground-state degeneracy determined in [12], [21][22][23][24] amounts to counting all the different vacua on that Higgs branch. Thus W-branes are one-particle excitations on a new massless branch of physics that opens up in deep scaling solutions. To think of W-branes as microstates is to confuse particle excitations with condensates and ground states.
A simple toy model is, perhaps, helpful here. Consider the N = 2 Landau-Ginzburg theory in 1 + 1 dimensions with superpotential W = x^{n+2} + a x, where x is the complex Landau-Ginzburg field and a is a parameter. This model has a residual discrete Z_{n+1} R-symmetry. The F-term constraint shows that there are n + 1 vacua (Ramond ground states) preserving N = 2 supersymmetry. Between each pair of vacua there are minimum-energy 1/2-BPS kinks, or solitons, carrying a discrete R-charge. Individual solitons thus preserve only N = 1 supersymmetry, and multi-soliton states break all the supersymmetry. (See, for example, [43].) The limit a → 0 corresponds to the n-th N = 2 superconformal minimal model [44][45][46]. At the conformal point the Ramond vacuum has an (n + 1)-fold degeneracy, and it is the chiral primary fields that interpolate between these states. In the limit a → 0, the solitons become massless and are related to combinations of left-moving and right-moving chiral primaries. For a ≠ 0, the 1/2-BPS solitons reflect the fundamental degrees of freedom of the massless field theory that emerges at a = 0 and yield information about its ground-state structure. In this sense, W-branes in microstate geometries are, relative to the supersymmetry of the microstate geometry, 1/2-BPS excitations that reflect the new massless degrees of freedom and the ground-state structure that will emerge in the deep scaling limit. Thus, while W-branes are not the 1/8-BPS microstates of the black hole, they do reveal degrees of freedom that will play an essential role in accessing a large component of the microstate structure.
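As a quick check of the vacuum counting (elementary algebra, not taken from [43]): the supersymmetric vacua are the critical points of the superpotential,

```latex
W(x) = x^{\,n+2} + a\,x
\quad\Longrightarrow\quad
W'(x) = (n+2)\,x^{\,n+1} + a = 0
\quad\Longrightarrow\quad
x^{\,n+1} = -\,\frac{a}{n+2}\,,
```

which, for a ≠ 0, has exactly n + 1 distinct roots forming a single Z_{n+1} orbit: these are the n + 1 vacua.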
Another important aspect of the supergravity approach taken in [27][28][29] is the smearing of the branes on the compactification manifold that reduces the problem to five dimensions. Ignoring the degeneracy of states in the lowest Landau level will, of course, do huge violence to the state counting and make vast families of distinct W-branes look exactly the same in supergravity. Indeed, it will collapse the W-brane states to simply the number of ways of wrapping non-trivial cycles in the space-time. In the field theory on the branes this would wash out most of the interesting structure of the new phase that emerges in the deep scaling limit. It would therefore be extremely interesting to see how one might describe distinct W-branes in supergravity without smearing. This will almost certainly mean working with higher-dimensional supergravity theories and finding ways of modeling the distinct Landau orbits, or states within the lowest Landau level.

| 8,339.8 | 2016-08-13T00:00:00.000 | ["Physics"] |
"Physics"
] |
Reranking candidate gene models with cross-species comparison for improved gene prediction
Background: Most gene finders score candidate gene models with state-based methods, typically HMMs, by combining local properties (coding potential, splice donor and acceptor patterns, etc.). Competing models with similar state-based scores may be distinguishable with additional information. In particular, functional and comparative genomics datasets may help to select among competing models of comparable probability by exploiting features likely to be associated with the correct gene models, such as conserved exon/intron structure or protein sequence features.

Results: We have investigated the utility of a simple post-processing step for selecting among a set of alternative gene models, using global scoring rules to rerank competing models for more accurate prediction. For each gene locus, we first generate the K best candidate gene models using the gene finder Evigan, and then rerank these models using comparisons with putative orthologous genes from closely-related species. Candidate gene models with lower scores in the original gene finder may be selected if they exhibit strong similarity to probable orthologs in coding sequence, splice site location, or signal peptide occurrence. Experiments on Drosophila melanogaster demonstrate that reranking based on cross-species comparison outperforms the best gene models identified by Evigan alone, and also outperforms the comparative gene finders GeneWise and Augustus+.

Conclusion: Reranking gene models with cross-species comparison improves gene prediction accuracy. This straightforward method can be readily adapted to incorporate additional lines of evidence, as it requires only a ranked source of candidate gene models.
Background
Cross-species comparisons have been shown to be effective in locating genes and predicting gene structures. De novo gene finders such as SGP2 [1], TWINSCAN [2,3], NSCAN [4], SLAM [5], SAGA [6], DOGFISH [7], EXONIPHY [8], SHADOWER [9], CONTRAST [10] have improved upon ab initio gene finders through comparison with genomic sequences of reference species, capturing phylogenetic footprints, as coding sequences tend to be relatively highly conserved. Reference-based gene finders such as DPS [11], Rosetta [12], Procrustes [13], GenomeScan [14], Projector [15], GeneWise [16], GeneMapper [17] and ExonHunter [18] have sought to predict genes in target species through alignment with genes or proteins from reference species, modeling substitution patterns, gaps, exon/intron length distribution, signals, and other potentially conserved features. Augustus+ [19,20] extends Augustus [21] by incorporating alignments with genes and proteins of reference species into its ab initio gene model. FgenesH++ [22] also extends an existing ab initio prediction model with comparative evidence. Broadly speaking, all of these gene finders employ the strategy of adding comparative side-information to an existing ab initio model; genome annotation pipelines such as EnsEMBL [23] and UCSC Known Genes [24] add comparative components to ab initio models and expressed-sequence data sources. JIGSAW [25,26] employs a somewhat different strategy, where ab initio predictions and orthologous proteins are treated as sources of evidence and integrated. All of these gene finders effectively incorporate cross-species information, achieving improvement in prediction accuracy over single-species gene finders, although doing so often requires significant effort in model and algorithm design and implementation to cast comparative information into a form compatible with the existing gene models.
We have developed a simple, yet effective, reranking approach for incorporating cross-species information as a post-processing step after initial gene prediction, obviating the need to build a new gene finder or to laboriously modify an existing one to incorporate comparative information. Reranking the K best hypotheses has been an effective technique in natural language processing systems [27][28][29]. For example, in speech recognition, it is a widely adopted practice to generate the K best recognition hypotheses with a fast one-pass recognizer, and then rerank them based on probabilities given by a more powerful language model [30]. The gene finder Evigan [31] integrates diverse sources of evidence, yielding a ranked list of the top K candidate gene models, which may then be reranked by comparing them with reference genes from closely related species. Gene models with good (but not necessarily the best) probabilities under Evigan that also exhibit strong similarity to reference genes may thus be selected as most likely.
Results and discussion
To assess the feasibility and accuracy of reranking candidate gene models based on cross-species comparison, we conducted an experiment seeking to identify gene models in the genome of Drosophila melanogaster (stripped of all annotation), using D. pseudoobscura as reference species. D. melanogaster was selected because the extensive effort that has been devoted to gene annotation in this species provides a "gold standard" for assessing performance.
All data used in this experiment were downloaded from FlyBase [32], including:

• Whole genome sequence for D. melanogaster (Release 5.1), used as the target genome for gene model predictions.
• Annotated gene models of D. pseudoobscura (Release 2.0), used as reference for reranking candidate gene models in D. melanogaster.
• Annotated gene models of D. melanogaster, used as training set for estimating reranker parameters, and also as a standard for evaluating prediction accuracy.
Evigan is a recently developed gene finder that integrates diverse sources of evidence, including predictions from multiple other gene finders. Using a dynamic Bayesian network to create consensus predictions based on the patterns of agreement and disagreement between the evidence sources, Evigan produces more accurate calls than any of the individual gene finders used as sources [31]. As output, Evigan provides a list of the K gene models with the highest probabilities according to its evidence integration network.
Among the five source gene finders used as input for Evigan, four (Augustus [21], Genscan [33], Genie [34] and GeneID [35]) predict genes by examining the D. melanogaster genomic sequence and modeling the nucleotide composition surrounding start, stop, splice donor and splice acceptor sites, codon usage and coding potential, exon length distribution, and other sequence features. The CONTRAST [10] gene finder predicts D. melanogaster genes based on conservation with a reference species genome, motivated by the assumption that coding sequence is more likely to be conserved than non-coding sequence. Although CONTRAST uses genomic nucleotide sequence information from another species, none of the source gene finders uses gene models or proteins to improve gene model predictions.
Using the source gene finders' prediction sets, Evigan identified 13,669 gene loci in the D. melanogaster genome (see Methods). For each locus, Evigan was then used to generate the K best candidate models (K ≤ 100), along with the probability of each model [31]. Figure 1 shows the number of candidate models identified per locus, and the number of exons per gene. Fewer than 20 candidate gene models were identified for 83% of the loci, although some loci contain as many as 100 competing models. In general, the number of plausible candidate models at a locus is a function of the number of exons of the gene: for loci exhibiting an average of < 5 exons per gene, Evigan identified a median of 5 candidate models per locus; a median of 33 candidate models was identified for loci having an average of ≥ 5 exons per gene. The number of candidate models per locus identified by Evigan reflects the agreement among the available evidence sources (gene finders). Disagreements about exon calls multiply out for multi-exon genes, explaining the abundance of candidate models for those genes.
To identify genes where cross-species comparison might permit reranking of alternative gene models, D. melanogaster loci predicted by Evigan were filtered to identify those where: (i) Evigan suggested multiple candidate gene models, and (ii) putative orthologs (see "Methods") were identified in D. pseudoobscura. As indicated in Table 1, Evigan identified 13,669 genes in the entire D. melanogaster genome (some of the 14,550 genes curated in D. melanogaster release v5.1 were not recognized by any of the source gene finders, or only by a small subset, and were therefore not identified as probable genes by Evigan). Multiple candidate models were identified for 11,701 genes (86%), and 9125 genes (67%) were paired with D. pseudoobscura genes as putative orthologs based on reciprocal best BLAST hits [36]; 7975 loci exhibited both multiple candidate models and putative orthologs. A small sample (2.5%, 198 loci) of these genes was randomly selected as a training set for estimating reranking parameters, and the remainder (7777 loci) were used to test the reranking algorithm. Note that the five source gene predictors were trained on their own specific training sets, but it is not very likely that these significantly overlap with the 198 loci randomly selected for estimating reranking parameters; otherwise the reranking parameters estimated from the training set would be biased and result in poor performance.

Figure 1: Number of candidate gene models per gene locus and number of exons per gene in Drosophila melanogaster. Blue bars provide a histogram showing the number of candidate gene models per locus, as identified by Evigan-5g. The red scatter plot shows the number of candidate gene models per locus versus the number of exons per gene (average number of exons per candidate where multiple candidates are predicted). Note that only a few candidate models are suggested for most genes; genes with many candidate models typically contain many exons.
The performance of Evigan-5g (which combines the five ab initio source gene finders) and of ReRanker-5g (which uses cross-species comparison to rerank the K best candidate gene models produced by Evigan-5g) was compared against the curated annotation of the D. melanogaster genome (release 5.1). Performance metrics include sensitivity and specificity at the gene, transcript and exon level (see "Methods" for details), and the evaluation software Eval [37] was used. As indicated in the top section of Table 2, ReRanker-5g always performs better than Evigan-5g, in terms of both sensitivity and specificity, at the exon, transcript and gene levels for the genome, improving on the advantage that Evigan typically shows over any of the sources of evidence it integrates [31]. ReRanker-5g selected the highest probability Evigan-5g model for 6031 loci (by construction, Evigan-5g and ReRanker-5g have the same prediction, and thus the same performance, for these loci); 4333 of these (71.9%) agree with the D. melanogaster genome annotation. Of the remaining 1746 loci, where ReRanker-5g selected a lower probability Evigan-5g model, the highest probability Evigan-5g model was correct in only 252 cases (14.4%), whereas the gene models selected by ReRanker-5g were correct in 500 cases (28.6%), indicating much better performance of ReRanker-5g than Evigan-5g. Results on these 1746 loci are shown in Table 3. The performance of Evigan-5g is relatively poor on these loci, where genes contain relatively more exons (6.6 exons per gene on average for Table 3 versus 4.6 exons per gene for Table 2), reflecting the difficulties that genes with more exons pose to ab initio gene finders.
Reranking candidate Evigan models based on sequence homology with D. pseudoobscura, however, significantly increases performance for these genes. When offered a selection of alternative gene models, cross-species comparison frequently allows ReRanker to select the correct models.
When ReRanker selects an alternative model, does it always choose the next most probable candidate from the list of possibilities defined by Evigan? Figure 2 presents the frequency and performance of ReRanker selections as a function of Evigan rank. ReRanker selected the second to fifth most probable Evigan model for 820 genes, the sixth to tenth most probable model for 228 genes, and even lower probability models for 698 genes. Comparison with the annotated D. melanogaster genome indicates that even when relatively low-ranking models were selected by the reranking algorithm, these are more likely to be correct than the top-probability Evigan model: the red lines (ReRanker-5g) are higher than the blue lines (Evigan-5g) at the exon, transcript and gene levels in Figure 2.
Incorporating Genie's prediction into ReRanker-5g (through Evigan-5g) could have introduced a circularity, because ReRanker's performance was evaluated on D. melanogaster annotations, which were developed with the help of Genie. However, this does not appear to be the case, since both ReRanker-5g and Evigan-5g significantly outperform Genie for sensitivity and specificity on the gene, transcript and exon level, as shown in Table 2. In fact, ReRanker-5g and Evigan-5g significantly outperform all of the five ab initio predictors used as evidence sources for Evigan-5g (Table 2).
Another factor that might raise concerns of circularity in our evaluations is ReRanker's use of D. pseudoobscura as a reference for gene prediction on D. melanogaster, since the former was annotated based on the latter. To address this, the performance of ReRanker-5g and Evigan-5g was evaluated on 1191 D. melanogaster loci whose putative orthologs in D. pseudoobscura have EST support. Over 38,000 EST sequences for D. pseudoobscura were obtained from dbEST [38] and aligned, using BLAST (E-value cutoff of 1e-5), to the D. pseudoobscura annotated transcripts that were identified as the putative orthologs of the entire test set (7777 transcripts). The transcripts where the aligned length covers more than half of the transcript length were retained as a subset having independent experimental support, resulting in 1191 transcripts. On the D. melanogaster loci that are these transcripts' putative orthologs, the performance of ReRanker-5g and Evigan-5g was evaluated and is presented in Table 4. ReRanker-5g outperforms Evigan-5g in both sensitivity and specificity at the gene, transcript and exon level. The evaluation on this subset with independent experimental evidence suggests that ReRanker's improved performance is not likely to be attributable to the two species' related annotation processes.

Table 1. Summary of D. melanogaster gene loci used in the experiments:
Genes with multiple Evigan-5g candidate models: 11,701
Genes with putative orthologs in D. pseudoobscura: 9,125
Intersection (genes with multiple candidate models and putative orthologs): 7,975
Training set (2.5% of intersection, randomly selected): 198
Test set (used for Table 2): 7,777
Genes where ReRanker-5g selected the highest probability Evigan-5g model: 6,031
Genes where ReRanker-5g selected a lower probability Evigan-5g model (used for Table 3): 1,746
A widely used alternative approach for using cross-species information in gene prediction involves aligning reference gene models (or proteins) to the target genome, and using these alignments to either build new gene finders, or modify existing ab initio ones by explicitly modeling alignments. GeneWise [16] aligns protein sequences to target genome sequences and uses the alignments to hypothesize introns, amino acid mutation patterns, sequencing errors, exon length statistics, and other gene prediction signals.
Augustus+ [20] extends the ab initio gene finder Augustus [21] by considering transcript or protein alignments as extrinsic hints, up- or down-weighting ab initio gene parses based on their consistency with the alignments. The bottom halves of Tables 2 and 3 compare the performance of ReRanker-5g with GeneWise and Augustus+ on the complete D. melanogaster test set, and on the subset of genes where Evigan-5g and ReRanker-5g chose different models. GeneWise was employed to align D. pseudoobscura proteins to their putative orthologous loci in D. melanogaster (using default parameters). GeneWise predictions of CDS, donor, acceptor, start and stop information were then provided as extrinsic hints for Augustus+ (using ab initio parameters trained for Drosophila melanogaster and default parameters for extrinsic protein hints). Evigan was also run to integrate the GeneWise models with the five ab initio source gene finders described above, yielding Evigan-6g.

Table 2. Performance on the entire D. melanogaster test set of 7777 loci (see Table 1). Augustus, CONTRAST, Geneid, Genie and Genscan are ab initio predictors used as evidence sources for Evigan-5g. ReRanker-5g selects among the K-best gene models produced by Evigan-5g using cross-species information. GeneWise, Augustus+ and Evigan-6g are other comparative gene predictors or approaches. Bold indicates where ReRanker-5g outperforms Evigan-5g; italics indicates where other comparative approaches outperform ReRanker-5g (see text).
Augustus+, GeneWise and some other comparative predictors do not need ortholog detection; rather, they align reference genes or proteins to a target genome and then refine signal predictions for significant hits. This strategy tends to identify relatively more target genes and thus enjoys higher sensitivity. ReRanker, by contrast, requires putative ortholog detection: if ortholog detection fails for a gene, ReRanker misses the opportunity to locate that gene and will thus show lower sensitivity. However, ReRanker's main goal is to improve specificity by improving the prediction of exact structures for genes whose existence and rough locations have been reasonably well validated.
On the whole D. melanogaster test set (Table 2), ReRanker-5g outperformed GeneWise and Augustus+ in terms of both sensitivity and specificity, at the exon-, transcript-, and gene-level (although GeneWise exhibited slightly greater specificity, and Augustus+ slightly greater sensitivity, in recognizing internal exons). ReRanker-5g also outperformed Evigan-6g as assessed by all criteria except for overall and internal exon sensitivity. In cases where ReRanker-5g and Evigan-5g make different choices (Table 3), ReRanker-5g outperforms GeneWise and Evigan-6g, but performs worse than Augustus+ in most categories. The better performance of Augustus+ (on this subset of genes, but not the genome as a whole; Table 2) may arise from increased sensitivity by using homology information in its ab initio model used to search for gene segments. The relatively poor performance of ReRanker-5g would then follow from a relative lack of candidates in the source evidence: ReRanker-5g is constrained to select from among the potential models suggested by Evigan-5g, which performs relatively poorly on this subset. These observations highlight the extent to which Evigan and ReRanker are limited by available sources of evidence, in particular the source gene finders, but it is important to note that the inclusion of additional gene predictions in the mix is likely to improve the performance of Evigan and thus ReRanker.
Conclusion
We have demonstrated that ReRanker leads to improvement in prediction accuracy through a simple strategy of incorporating additional evidence. There are many directions along which this work can be extended or improved. The first step of the reranking strategy is to identify single-gene loci on a target species. If this step finds incorrect loci, such as loci that contain more than one gene, partial genes or pseudogenes, it could mislead ReRanker, which assumes that a locus contains a single gene. In the ortholog identification step, most of those incorrect loci will be removed, because they tend not to be associated with orthologs from a reference species. But it would be very useful to devise an additional step before reranking, to identify problematic loci and even recover correct locus information.

Table 4. Performance on the 1191 D. melanogaster loci whose putative orthologs in D. pseudoobscura are supported by EST sequences (see text for details). Note that ReRanker-5g improves on Evigan-5g across the board (improvement indicated by bold).

Figure 2: Performance by rank on Drosophila melanogaster. The table on the bottom right shows the number of loci where ReRanker-5g selects Evigan-5g candidate gene models of a given rank. For example, there are 6031 loci where ReRanker selects the most probable candidate models as defined by Evigan; there are 820 loci where ReRanker-5g selects the second to fifth most probable candidate models as defined by Evigan, and so on. The other panels show the F-score (harmonic mean of sensitivity and specificity) of Evigan-5g and ReRanker-5g at the exon, transcript and gene levels for various rank ranges. ReRanker is successful at improving the identification of correct gene models even when the selected candidates are far from the top of the list provided by Evigan.

The reranking strategy is sufficiently general, in the sense that it is neither specific to Evigan candidate gene models, nor limited to incorporating information from cross-species comparisons. The same conceptual strategy could readily be applied to candidate gene models produced by other annotation pipelines, and could accommodate diverse sources of evidence in place of, or in addition to, comparative genomics data. For example, one can easily envision further improving gene model selection by reranking based on protein sequence motifs or signals, transcript or protein expression data, etc. In addition, it is natural to relate Evigan's K-best gene models to alternative transcripts, which might allow us to extend ReRanker to predict multiple transcripts on a target species when the putative ortholog on a reference species exhibits alternative transcripts.
Methods
This section details how ReRanker prioritizes candidate gene models on a target species by comparison with orthologs from a reference species. Subsections address the generation of candidate gene models, ortholog identification between the two species, the construction of similarity features between gene models, the form of the scoring function for candidate gene models, and the learning of the reranker's scoring parameters.
Generating candidate gene models
Gene loci on the target species are first defined, and candidate gene models for each locus are generated by Evigan. The term gene locus refers to a genomic region containing only a single gene. Gene loci on the target species were identified from an initial prediction set produced by Evigan integrating multiple lines of evidence (Augustus, Genscan, Genie, Geneid and CONTRAST were used in the experiment). The genomic region defined by each gene in the initial prediction set is extended in both directions on the genomic sequence until the neighboring predicted genes are reached, as sketched below. Each such extended region is a gene locus. This procedure typically produces thousands or tens of thousands of gene loci on the target species, depending on the size of the genome and the Evigan initial prediction set.
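To make the locus-construction step concrete, here is a minimal sketch (our own illustration with a hypothetical interval representation; Evigan's actual implementation may differ):

```python
def build_loci(genes, chrom_length):
    """Define one locus per predicted gene by extending each gene's region
    in both directions until the neighboring predicted genes are reached.

    genes: non-overlapping (start, end) intervals of the initial Evigan
           predictions on one chromosome, 0-based inclusive coordinates.
    """
    genes = sorted(genes)
    loci = []
    for i, (start, end) in enumerate(genes):
        # Left boundary: just past the previous gene, or chromosome start.
        left = genes[i - 1][1] + 1 if i > 0 else 0
        # Right boundary: just before the next gene, or chromosome end.
        right = genes[i + 1][0] - 1 if i + 1 < len(genes) else chrom_length - 1
        loci.append((left, right))
    return loci
```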
For each proposed gene locus, Evigan was used to generate the K best candidate gene models, along with the posterior probability of each, by integrating the evidence overlapping the region. K is a parameter passed to the K-best decoder in Evigan as the maximum number of alternative paths to be generated. If the aggregated evidence at a locus supports fewer than K candidate gene models, all possible models will be generated. The K-best decoder [39] in Evigan uses a variation of the Viterbi decoding algorithm [40,41] to search for high probability paths, with O(K N log N) computational complexity, where N is the size of the standard Viterbi trellis; this is quite efficient. In the original Viterbi decoding implementation of Evigan, an optimal path may contain multiple genes, whereas in the implementation of the K-best decoder only single-gene paths are returned. Note that the best candidate in the K-best list for a locus may or may not be exactly the same as the initially predicted gene used to identify the locus. In practice, however, discrepancies are rarely observed.
Ortholog identification
Ortholog pairs between the target species and a reference species are identified by BLASTP [42] reciprocal best hits between the best candidate models (translated into protein products) on the target species and the proteins of the reference species. Specifically, if a gene's best candidate model on the target species and a protein from the reference species are reciprocal best hits under BLASTP (default parameters, E-value cutoff of 1e-5), they are considered an ortholog pair. This is a rather simplified approach to identifying orthologs, but in practice it produces reasonably good results. More comprehensive approaches would search all candidate models of a gene against the reference proteins, or examine multiple species and the phylogenetic relationships between them [36,43].
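A minimal sketch of the reciprocal-best-hit computation (our own illustration; the hit dictionaries are a hypothetical representation of parsed BLASTP results, keyed by query with (subject, bitscore) pairs):

```python
def reciprocal_best_hits(target_hits, reference_hits):
    """Pair genes as putative orthologs via reciprocal best BLASTP hits.

    target_hits:    {target_gene: [(reference_protein, bitscore), ...]}
    reference_hits: {reference_protein: [(target_gene, bitscore), ...]}
    Hit lists are assumed pre-filtered at an E-value cutoff of 1e-5.
    """
    def best(hits):
        # Best-scoring subject for each query.
        return {q: max(hs, key=lambda h: h[1])[0]
                for q, hs in hits.items() if hs}

    best_t = best(target_hits)
    best_r = best(reference_hits)
    # Keep a pair only if each member is the other's best hit.
    return [(t, r) for t, r in best_t.items() if best_r.get(r) == t]
```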
Reranking features
A variety of features are extracted from candidate gene models, including the posterior probabilities defined by Evigan and various similarity features determined by comparison with orthologous proteins/gene models. Note that these features could readily be expanded to include additional informative similarity features. In the current implementation, six features are extracted for each candidate gene model, as described below. Let t and r denote a candidate gene model (or its translated protein) on the target species and a protein/gene on the reference species, respectively.
Posterior probability
Let p(t) denote t's Evigan posterior probability given the evidence. The probability feature f1(t) is the logarithm of p(t):
$$f_1(t) \;=\; \log p(t)\,.$$
Length similarity
Let l(t) and l(r) denote the coding sequence lengths of t and r. The length similarity feature f2(t) is given by
$$f_2(t) \;=\; \frac{\,|l(t) - l(r)| + 1\,}{\,l(r) + 1\,}\,.$$
The absolute difference in coding length, |l(t) - l(r)|, is normalized by the coding length l(r) of the reference gene. (Normalizing by the coding length of the target gene model is not a good idea, because it may bias towards target candidate gene models that are very long or short.) The +1 terms in the numerator and denominator smooth the counts.
Splice count similarity
As with coding length, we also compare the number of splice sites in source and target. Let s(t) and s(r) denote the number of splice sites of t and r. The splice site feature f3(t) is given by
$$f_3(t) \;=\; \frac{\,|s(t) - s(r)| + 1\,}{\,s(r) + 1\,}\,.$$
Again, the +1 terms in the numerator and denominator smooth the counts, and also prevent division by zero.
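A small sketch of these two smoothed features, using the forms reconstructed above (note that smaller values indicate greater similarity; the learned weight for each feature can absorb the sign):

```python
def length_feature(l_t, l_r):
    # f2: smoothed absolute difference in coding length, normalized by the
    # reference length l(r); smaller values mean more similar lengths.
    return (abs(l_t - l_r) + 1) / (l_r + 1)

def splice_feature(s_t, s_r):
    # f3: same form for splice-site counts; the +1 terms smooth the counts
    # and prevent division by zero for single-exon reference genes.
    return (abs(s_t - s_r) + 1) / (s_r + 1)
```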
Sequence similarity
The sequence similarity feature between t and r is computed from the alignment score given by DiAlign [44], a multiple sequence alignment program. When two sequences are aligned, DiAlign first searches for multiple gapless local alignments, referred to as segments, and then constructs a global alignment between the two sequences by searching for the best set of consistent segments. In addition to producing gapless local alignments, DiAlign also provides for each segment an alignment score, which is essentially the negative logarithm of the probability that two random sequences could be aligned as well as these two. Suppose the coding sequences of t and r are aligned by DiAlign (translated alignment), and let A(t, r) denote the sum of the alignment scores of the segments constituting the global alignment; A(t, r) is roughly linear in the lengths of t and r. The sequence similarity feature f4(t) is given by normalizing the alignment score by the length of r:
$$f_4(t) \;=\; \frac{A(t, r)}{l(r)}\,.$$
Shared splice sites
The segments produced by DiAlign can be used to extract another useful similarity feature: shared splice sites. Figure 3 shows the alignment between the coding sequences of t and r output by DiAlign, where blue boxes represent gapless local alignments and wavy lines represent unaligned regions. Splice sites of t and r are mapped onto the segments, as shown by the arrows in the figure. If a splice site of t and a splice site of r are mapped to the same relative position within a segment, as exemplified by the first and third pairs of splice sites in the figure, they are identified as a shared splice site. Let C(t, r) denote the number of shared splice sites identified by this approach. The shared splice feature f5(t) is given by normalizing the shared splice count C(t, r).
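A minimal sketch of the shared-splice-site count (our own illustration; the (t_start, r_start, length) segment triples are a hypothetical representation of DiAlign's gapless local alignments):

```python
def shared_splice_sites(segments, t_sites, r_sites):
    """Count splice sites mapped to the same relative position in a segment.

    segments: (t_start, r_start, length) triples, a hypothetical
              representation of DiAlign's gapless local alignments.
    t_sites, r_sites: splice-site positions on the two coding sequences.
    """
    r_set = set(r_sites)
    shared = 0
    for t_start, r_start, length in segments:
        for site in t_sites:
            offset = site - t_start
            # Shared if the site lies inside this segment and the reference
            # has a splice site at the same offset from the segment start.
            if 0 <= offset < length and (r_start + offset) in r_set:
                shared += 1
    return shared
```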
Signal peptides
A signal peptide feature, f6(t), represents the co-occurrence of predicted signal peptides on t and r. The presence or absence of a signal peptide on t and r is predicted by SignalP-3.0 [45]. Let S(t) and S(r) denote the presence (1) or absence (0) of a signal peptide on t and r. Then the feature f6(t) is given by
$$f_6(t) \;=\; S(t)\,S(r)\,.$$
If the reference gene contains a signal peptide, target candidate gene models with signal peptides are preferred; if the reference gene does not contain a signal peptide, no preference is imposed on target candidate gene models. The one-sided nature of the feature is motivated by the relatively low abundance of signal peptides and the observation that signal peptide detection algorithms tend to focus on sensitivity rather than specificity. If the reference gene does not have a signal peptide while a target candidate model does, the candidate will not be penalized.
Scoring function
The features just described are used to compute a score S(t) for each candidate gene model t. The features of t are arranged into a feature vector f(t), and the score is defined by the inner product S(t) = f(t)·w, where w is a weight vector that will be learned from training data. Given K candidate gene models t_1, ..., t_K, the index of the highest-scoring model is given by the decision rule
$$k^{*} \;=\; \arg\max_{k = 1, \dots, K}\; \mathbf{f}(t_k)\cdot\mathbf{w}\,.$$
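The decision rule is a single inner product per candidate; a minimal sketch (our own illustration, not the authors' code):

```python
import numpy as np

def rerank(feature_vectors, w):
    """Return the index k* of the highest-scoring candidate gene model.

    feature_vectors: (K, 6) array; row k holds f(t_k) = (f1, ..., f6)
    w: weight vector learned by MIRA (see "Weight estimation")
    """
    scores = feature_vectors @ w   # S(t_k) = f(t_k) . w for every candidate
    return int(np.argmax(scores))
```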
Weight estimation
The parameter weight vector w in the scoring function is estimated from a training set D to optimize reranking accuracy using the MIRA online large-margin learning algorithm [46].
The training set D = {e_1, ..., e_N} is a set of training examples, where each example e ∈ D contains the set of candidate models for a training gene. More specifically, each e ∈ D has the form e = {(t_k, q_k) | k = 1, ..., K}, where t_k is a candidate model and q_k is the quality of t_k relative to the reference annotation. In our experiments, q_k is the exon-level F-score (harmonic mean of sensitivity and specificity) of t_k relative to the reference annotation genes at t_k's locus.
The MIRA learning algorithm [46] learns w by looping over the training examples and updating w at each example so that the lowest-error candidate model is selected for the example by the decision rule given above. The weight vector w is initially the zero vector. The pseudocode "Outline of MIRA update" shows a single cycle of updating the weight vector. At each round, the algorithm fetches an example e from the training set, reranks its candidate models and selects the best predicted candidate t_{k*} using the current weight vector. The true best candidate, denoted t_k̂, is the one with the maximum quality assessment.
The algorithm updates the weight vector by solving an optimization problem. The goals of the optimization problem are two-fold: keep the new weight vector as close to the current weight vector as possible, and score the true best candidate higher than the predicted candidate by at least their quality difference q_k̂ − q_k*. C is a weight factor balancing the two goals, which is set to 5 in the experiments. The algorithm loops over the examples in the training set until the weight vector no longer changes significantly.
Outline of MIRA update
Given an example e = {(t_k, q_k) | k = 1, ..., K} and a current weight vector w_n, the updated weight vector w_{n+1} ← MIRAupdate(e, w_n) is computed as follows:
- Use the current weight vector w_n to rank the candidate models and select the index of the best predicted candidate by k* = arg max_{k = 1,...,K} f(t_k)·w_n.
- Let k̂ be the index of the true best candidate: k̂ = arg max_{k = 1,...,K} q_k.
- Find the solution (w, ξ) of the following optimization problem:
  minimize (1/2)‖w − w_n‖² + Cξ
  subject to f(t_k̂)·w − f(t_k*)·w ≥ (q_k̂ − q_k*) − ξ and ξ ≥ 0.
- Set w_{n+1} = w.
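Because the update involves a single linear constraint, the QP above admits a closed-form solution of the passive-aggressive type. The following Python sketch (ours, with assumed data structures) implements one MIRAupdate call; a wrapper would average the weight vectors across rounds, as described next:

```python
import numpy as np

def mira_update(cands, w, C=5.0):
    """One MIRA update for example e given as a list of (f_k, q_k) pairs.
    Solves: min 1/2||w' - w||^2 + C*xi  subject to
    w'.(f_khat - f_kstar) >= (q_khat - q_kstar) - xi, xi >= 0.
    With a single constraint, the solution is a clipped analytic step."""
    k_star = int(np.argmax([np.dot(f, w) for f, _ in cands]))  # predicted best
    k_hat = int(np.argmax([q for _, q in cands]))              # true best
    df = cands[k_hat][0] - cands[k_star][0]                    # feature difference
    margin = cands[k_hat][1] - cands[k_star][1]                # quality difference
    loss = margin - np.dot(w, df)
    if loss <= 0 or not np.any(df):
        return w                        # constraint already satisfied
    tau = min(C, loss / np.dot(df, df))
    return w + tau * df
```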
It is common practice to take the average of the updated weight vectors over all rounds as the final output weight vector, because the averaged weight vector often gives better performance than any individual weight vector [46]. The pseudocode titled "MIRA algorithm wrapper" shows a wrapper that calls the MIRA update as a subroutine at each round and outputs the final weight vector.

Figure 3: Inferring shared splice sites from an alignment. Blue boxes represent segments (local alignments) produced by DiAlign [44] between the coding sequences of two gene models, and the wavy lines represent unaligned regions. Arrows represent mapped splice sites. The first and third pairs of overlapping splice sites are identified as shared splice sites.

Candidate gene models are generated by Evigan; ReRanker's prediction is the candidate model with the highest reranking score, as described above. The performance of prediction sets is assessed by sensitivity and specificity at the exon, transcript, and gene levels using the Eval program [37] (only coding parts were evaluated). Sensitivity is defined as the fraction of annotated exons (or genes) predicted correctly. Specificity is the fraction of the predicted exons (or genes) that correspond precisely to an exon (or gene) in the curated annotation set. F-score is the harmonic mean of sensitivity and specificity. An exon is considered correct if its boundaries and reading frame are both correct. A gene is counted correct if all of its exons are precisely predicted. For genes with multiple transcripts, sensitivity and specificity were determined at the exon, transcript, and gene levels: a transcript is considered correct if all its exons are accurately predicted, and a gene is counted correct if one of its transcripts is predicted correctly. | 7,386.4 | 2008-10-14T00:00:00.000 | [
"Biology"
] |
Cell-Laden Composite Hydrogel Bioinks with Human Bone Allograft Particles to Enhance Stem Cell Osteogenesis
There is a growing demand for bone graft substitutes that mimic the extracellular matrix properties of native bone tissue to enhance stem cell osteogenesis. Composite hydrogels containing human bone allograft particles are particularly interesting due to the inherent bioactivity of the allograft tissue. Here, we report a novel photocurable composite hydrogel bioink for bone tissue engineering. Our composite bioink is formulated by incorporating human allograft bone particles into a methacrylated alginate formulation to enhance adult human mesenchymal stem cell (hMSC) osteogenesis. Detailed rheology and printability studies confirm the suitability of our composite bioinks for extrusion-based 3D bioprinting. In vitro studies reveal high cell viability (~90%) for hMSCs up to 28 days of culture within 3D bioprinted composite scaffolds. When cultured within bioprinted composite scaffolds, hMSCs show significantly enhanced osteogenic differentiation as compared to neat scaffolds, based on alkaline phosphatase activity, calcium deposition, and osteocalcin expression.
Introduction
Large bone defects caused by traumatic injury, disease, infection, tumor removal, fracture, and complicated congenital malformation are difficult to treat, as the size of the defect is beyond the bone's intrinsic capacity for self-regeneration [1][2][3]. Autograft bone, i.e., bone tissue from a patient's own body, is the gold standard for bone grafting to treat large bone defects [4][5][6][7][8]. Limitations of autograft bone, including the availability of large enough bone tissue and complications at the harvesting site, such as infection, pain, and bleeding, have led to a search for alternative grafting options [5]. Bone allografts, i.e., human bone tissue from donors, and synthetic bone graft substitutes, including porous scaffolds composed of biodegradable polymers, bioceramics, and their composites, are commonly used alternatives [9][10][11][12]. Allograft bone has recently gained significant interest due to its inherent bioactivity, such as its osteoconductive and osteoinductive characteristics [4,13]. The possibility of implant rejection (immunogenicity) and of disease transmission currently limits the direct use of commercially available allograft bone tissues [14,15]. Decellularization of the allograft bone has been shown to effectively reduce the risk of immunogenicity [13,16,17], and coating the allograft with different minerals is reported to enhance bone mineral deposition and functional integration of the allograft by decreasing fibrotic tissue formation [18][19][20][21]. Despite these advancements, obtaining a large allograft bone that fits perfectly into the defect site, considering the size and shape of the defect, and sterilizing large-scale bone tissue without damaging its structure and function remain challenging [22][23][24].
To overcome the abovementioned issues raised by the direct use of allograft bone, allograft bone can instead be used as a building material to construct a scaffold that serves as a bone graft substitute. In this work, alginate was functionalized (methacrylated) to synthesize a photocurable MeALG polymer [85,86]. Methacrylate groups allow for light-induced radical polymerization (or crosslinking) to form hydrogels, and for tethering of bioactive cues in the form of cysteine-containing peptides via an addition reaction [85]. Although bioceramics, such as hydroxyapatite [87], bioactive glass [88], silica [89], and calcium phosphate derivatives [90][91][92][93], are commonly incorporated into alginate to form composite hydrogels for bone tissue engineering, studies focusing on human bone allograft particles are lacking. This study aims to address this gap. Here, we report the processing of human allograft tissue into micron-sized particles and the formulation of composite bioinks using these particles. We present detailed characterization of the rheological properties of the composite bioinks and their printability, as well as the mechanical behavior of the 3D bioprinted constructs. We demonstrate the use of our composite bioink formulations for bone tissue engineering by investigating human mesenchymal stem cell (hMSC) osteogenesis within 3D bioprinted constructs up to 28 days of culture, based on alkaline phosphatase and alizarin red assays as well as osteocalcin immunostaining.
Methacrylated Alginate (MeALG) Synthesis
Methacrylated alginate (MeALG) was synthesized as described previously [85,94]. Briefly, a 0.5% (w/v) alginate solution was prepared by dissolving 5 g of medium-viscosity alginate (alginic acid sodium salt from brown algae, Sigma-Aldrich Inc., St. Louis, MO, USA) in 1 L of DI water. The solution was kept under magnetic stirring at 1-4 °C. Once the alginate was fully dissolved, 10 mL of methacrylic anhydride (MA, Sigma-Aldrich Inc., St. Louis, MO, USA) was added dropwise into the solution over a span of 1.5-2 h. A 2 M NaOH solution (Sigma-Aldrich Inc., St. Louis, MO, USA) was simultaneously added dropwise to adjust the pH of the solution to 8-9. After the addition of the MA, the pH of the mixture was maintained by gradually dripping in 2 M NaOH solution for 8 h using an automated pH controller. The solution was kept at 4 °C overnight. The reaction was resumed the following day by adding 5 mL of MA while maintaining the pH at 8-9. The material was then dialyzed (Spectra/Por® 1 dialysis membrane, 6-8 kDa, Fisher Scientific, Pittsburgh, PA, USA) against DI water for 5 days and lyophilized using a benchtop freeze dryer (Labconco FreeZone 4.5 L, Fisher Scientific, Pittsburgh, PA, USA). 1H NMR (Bruker Avance III HD 500 MHz, Bruker Scientific, Billerica, MA, USA) was used to confirm the methacrylate percentage as described previously [85].
Bone Particle Processing
Cancellous allograft bone (crushed cancellous bone from a 53-year-old male) was kindly provided by the Musculoskeletal Tissue Foundation (MTF) Biologics (Edison, NJ, USA). Crushed bone pieces were pulverized manually for 13 h by using a mortar and pestle set (JMD050, Deep Form, 50 mL, United Scientific Supplies Inc., Libertyville, IL, USA). A Mastersizer 3000 particle size analyzer (Malvern Panalytical Inc., Westborough, MA, USA) was used to study the particle size distribution.
Composite Bioink Preparation
The composite bioink formulation was prepared by dissolving MeALG powder, bone particles, and photoinitiator (LAP, lithium phenyl-2,4,6-trimethylbenzoylphosphinate, VWR International, Wayne, PA, USA) in phosphate-buffered saline (PBS, Fisher Scientific, Pittsburgh, PA, USA). First, a LAP stock solution was prepared by dissolving 0.1% (w/v) LAP in PBS. Then, 3% (w/v) MeALG and 1% (w/v) bone particles were added to the LAP stock solution and kept under magnetic stirring for 5 days. To prepare 2 mL of ink, 0.06 g of MeALG and 0.02 g of bone particles were added to the 0.1% LAP stock solution.
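As a sanity check on the amounts quoted above, the % (w/v) arithmetic can be captured in a two-line helper (illustrative only, not part of the protocol):

```python
def mass_for_wv(percent_wv, volume_ml):
    """Mass (g) of solute needed for a given % (w/v) in a given volume (mL):
    % (w/v) = g per 100 mL."""
    return percent_wv * volume_ml / 100.0

# Reproduces the stated recipe for 2 mL of composite ink:
assert mass_for_wv(3.0, 2.0) == 0.06   # g MeALG at 3% (w/v)
assert mass_for_wv(1.0, 2.0) == 0.02   # g bone particles at 1% (w/v)
```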
For in vitro culture studies, cell-laden composite bioinks were prepared by adding human mesenchymal stem cells (hMSCs, Lonza Walkersville Inc., MD, USA) to the composite ink formulation (~3 million cells per mL of ink formulation). For this purpose, hMSCs (passage 4, Lonza) were cultured to ~80% confluency in growth media (α-MEM (Gibco, Thermo Fisher Scientific LLC, Asheville, NC, USA) supplemented with 10% fetal bovine serum (FBS, Gibco, Thermo Fisher Scientific LLC, Asheville, NC, USA) and 1% penicillin-streptomycin (Gibco, Thermo Fisher Scientific LLC, Asheville, NC, USA)). Prior to dissolution, the polymer and bone particles were sterilized under ultraviolet germicidal irradiation for 2 h. The LAP stock solution was filtered using a sterile syringe filter (0.22 µm, Sigma-Aldrich Inc., St. Louis, MO, USA). Subsequently, cells were mixed into the ink under magnetic stirring for 20 min prior to 3D bioprinting.
Rheology
A Kinexus Ultra+ rheometer (Netzsch Instruments North America LLC, Burlington, MA, USA) was used to study the rheology of the bioinks. A parallel-plate geometry (20 mm plate diameter and 0.7 mm gap) was used. The viscosity of the ink was measured as a function of shear rate (0.01 to 1000 s⁻¹). Strain sweep (0.5-300% at 1 Hz) and frequency sweep (0.1-100 Hz at 0.05% strain) tests were conducted to study the evolution of the elastic modulus (G′) and viscous modulus (G″). To investigate the light-induced crosslinking process, an optical kit (Netzsch) connected to a UV light source (Omnicure S2000, Excelitas Technologies, Chicago, IL, USA; 356 nm, 10 mW/cm²) was used. G′ and G″ were monitored over time (at 1 Hz). The light intensity was adjusted to represent the intensity during the printing process (405 nm, 40 mW/cm²) according to the molar absorptivity spectrum of the photoinitiator (LAP) [45]. After 2 min of equilibration, the ink solution was exposed to the light (10 mW/cm²) for 20 min to fully crosslink the sample.
Optimization of 3D Bioprinting Parameters
In this study, we used a BIO X bioprinter (CELLINK LLC, Boston, MA, USA) with a syringe-based print head and a 25G needle with a 0.25 mm internal diameter (Blunt End Dispensing Tip, 25G, Fisnar Inc., Germantown, WI, USA). A standard line test [45] was performed to evaluate the printability of the bioink formulations with respect to print pressure (100, 150, and 200 kPa) and speed (5-40 mm/s). Immediately after each strut (or line) was printed, it was exposed to UV light (405 nm, 2 mW/cm²) for 15 s to form a crosslinked hydrogel. Optical microscopy was used to capture images of the struts, and ImageJ (NIH) was used to measure the strut width. A grid pattern (1 mm × 1 mm) was printed to evaluate the spatial uniformity of the printed struts. Here, the uniformity of the pores was investigated by drawing diagonal lines.
Characterization of Mechanical Behavior
The mechanical behavior of the composite hydrogels was evaluated using compression tests. Samples were fabricated in the form of disks (14 mm in diameter and 2 mm in height). The composite hydrogel samples were weighed and then soaked in PBS overnight to equilibrate their swelling. Samples were weighed again, and the swelling percentage of each sample was calculated as

swelling (%) = (w_f − w_i)/w_i × 100,

where w_f and w_i represent the weight at equilibrium swelling and post-printing, respectively. The equilibrated samples were then used for the compression test. For this purpose, a Kinexus Ultra+ rheometer was used to apply an increasing normal force from 0.05 N to 15 N [45,95]. The gap was recorded to calculate the strain.
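A minimal analysis sketch for the two quantities above (ours, not the authors' code): the 7 mm disk radius follows from the 14 mm samples, and the small-strain cutoff for the modulus fit is an assumption.

```python
import numpy as np

def swelling_percent(w_f, w_i):
    """Swelling (%) from post-printing weight w_i and equilibrium weight w_f."""
    return (w_f - w_i) / w_i * 100.0

def youngs_modulus(force_n, gap_mm, gap0_mm, radius_mm=7.0, max_strain=0.1):
    """Estimate E (kPa) from a normal-force compression test on a disk.
    stress = F / (pi r^2); strain = (gap0 - gap) / gap0.
    E is the slope of a linear fit over the small-strain region."""
    area_m2 = np.pi * (radius_mm * 1e-3) ** 2
    stress_kpa = np.asarray(force_n) / area_m2 / 1e3
    strain = (gap0_mm - np.asarray(gap_mm)) / gap0_mm
    mask = strain <= max_strain
    slope, _ = np.polyfit(strain[mask], stress_kpa[mask], 1)
    return slope
```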
3D Bioprinting of Composite Scaffolds
Cell-laden composite bioinks (3% MeALG with 1% bone particles) were printed on methacrylated glass slides [96] at optimized bioprinting parameters: 150-200 kPa at 20-30 mm/s for neat inks, and 150-230 kPa at 20-30 mm/s for composite inks. 3D scaffold designs were created in Autodesk® Fusion 360™, and the 3D models were sliced with Slic3r in Repetier-Host to generate G-code files. A 2 mm × 2 mm grid scaffold composed of 4 layers (with a layer height of 150 µm and a 500 µm offset between struts) was used for culture studies. Each bioprinted layer was partially crosslinked for 15 s (405 nm, 2 mW/cm²) to allow formation of a self-supporting layer. Bioprinted scaffolds were then exposed to light for 1 min to finalize the bioprinting process. Bioprinted cell-laden scaffolds were immediately transferred into non-treated 6-well plates, and 5 mL of growth media was added to each well.
In Vitro Studies
Cell viability studies were performed on 3D bioprinted scaffolds (4-layer grid scaffolds) using a Live/Dead staining kit (Invitrogen, Thermo Fisher Scientific LLC, Asheville, NC, USA) at culture days 1, 4, 7, 14, 21, and 28. In this assay, live cells were stained with calcein-AM dye (green, 0.5 µL/mL), and ethidium homodimer (red, 2 µL/mL) was used for staining dead cells. A confocal laser scanning microscope (TCS SP8 MP, Leica Microsystems Inc., Buffalo Grove, IL, USA) was utilized to capture cell images. Three images per scaffold were taken and transferred to ImageJ (NIH, Public Domain, Bethesda, MD, USA) software to analyze the cell viability by counting the number of live and dead cells.
To evaluate the osteogenic differentiation of the hMSCs, alkaline phosphatase (ALP) activity and alizarin red (AR) assays, as well as osteocalcin (OC) immunostaining, were performed. For this purpose, 3D bioprinted scaffolds were cultured in growth media for one day, followed by culturing in osteogenic differentiation media for up to 28 days. Osteogenic differentiation media was prepared using high-glucose DMEM (Gibco, Thermo Fisher Scientific LLC, Asheville, NC, USA) supplemented with 100 nM dexamethasone (Sigma-Aldrich Inc., St. Louis, MO, USA), 37.5 µg/mL L-ascorbic acid (Sigma-Aldrich Inc., St. Louis, MO, USA), 10 mM β-glycerophosphate disodium salt hydrate (Sigma-Aldrich Inc., St. Louis, MO, USA), 10% fetal bovine serum (FBS, Gibco), and 1% penicillin-streptomycin (Gibco, Thermo Fisher Scientific LLC, Asheville, NC, USA). ALP activity was evaluated using the QuantiChrom™ Alkaline Phosphatase Assay Kit (BioAssay Systems, Hayward, CA, USA). For this purpose, 3 scaffolds per condition were collected at the desired culture time. Collected scaffolds were lysed with 0.25% Triton X-100 in DI water overnight. Then, lysate samples were reacted with a working solution prepared according to the protocol provided by the supplier. A plate reader (Infinite M200 Pro, Tecan Inc., Morrisville, NC, USA) was used to read the absorbance at 405 nm. For the AR staining assay, the collected scaffolds were fixed in 70% ethanol for 2 h. After a DI water wash (3×), cells were stained with the AR staining kit (Sigma, St. Louis, MO, USA) at 4 °C overnight. Scaffolds were then washed with DI water several times to remove excess AR stain. After pictures of the scaffolds were taken, the scaffolds were incubated in 10% cetylpyridinium chloride (Sigma, St. Louis, MO, USA) in sodium phosphate buffer (10 mM, pH 7, Sigma) overnight to extract the AR stain from the cells. The collected solutions were scanned with a plate reader at an absorbance of 562 nm to quantify calcium deposition. For OC immunostaining, scaffolds were collected at 14 days of culture. Cells were fixed with 4% formaldehyde solution (Sigma-Aldrich Inc., St. Louis, MO, USA) for 25 min, permeabilized with 0.25% Triton X-100 (Sigma-Aldrich Inc., St. Louis, MO, USA) for 1 h, and subsequently incubated in a blocking solution (10% goat serum (Thermo Fisher Scientific LLC, Asheville, NC, USA) in PBS) for 3 h at room temperature. The OC primary antibody (1:200, monoclonal mouse, Invitrogen, Thermo Fisher Scientific LLC, Asheville, NC, USA) in staining solution (3% bovine serum albumin + 0.1% Tween-20 + 0.25% Triton X-100) was used as the primary stain. Cells were incubated in the primary staining solution for 48 h at 4 °C. Cells were then stained with an Alexa Fluor 488 rabbit anti-mouse secondary antibody (1:100, Invitrogen, Thermo Fisher Scientific LLC, Asheville, NC, USA) in staining solution for 24 h. In addition, phalloidin (rhodamine phalloidin, Invitrogen, Thermo Fisher Scientific LLC, Asheville, NC, USA) and DAPI (Thermo Fisher Scientific LLC, Asheville, NC, USA) were used to stain the cells for imaging of F-actin and cell nuclei, respectively. Confocal and multiphoton microscopy (TCS SP8 MP, Leica Microsystems Inc., Buffalo Grove, IL, USA) was used to image the cells.
Statistics
The data were analyzed with Minitab software (version 20.3.0, Minitab, LLC, State College, PA, USA) and are presented as mean ± standard deviation for n ≥ 3 samples. Analysis of variance (ANOVA) with Tukey's post hoc test at a 95% confidence level was used for comparisons between conditions. A p-value < 0.05 was considered statistically significant.
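For readers reproducing the statistics outside Minitab, the equivalent ANOVA-plus-Tukey comparison can be sketched in Python; the data below are made up for illustration:

```python
import numpy as np
from scipy import stats
from statsmodels.stats.multicomp import pairwise_tukeyhsd

rng = np.random.default_rng(0)
neat = rng.normal(21, 6.2, size=3)        # hypothetical E (kPa), neat gels
composite = rng.normal(51, 7.1, size=3)   # hypothetical E (kPa), composite gels

f_stat, p_value = stats.f_oneway(neat, composite)
print(f"ANOVA: F = {f_stat:.2f}, p = {p_value:.4f}")

values = np.concatenate([neat, composite])
groups = ["neat"] * 3 + ["composite"] * 3
print(pairwise_tukeyhsd(values, groups, alpha=0.05))
```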
Composite Bioink Formulation and Characterization
In this study, human allograft bone particle-containing composite bioinks are developed for material extrusion-based 3D bioprinting, also known as direct ink writing (DIW), to bioprint cell-laden composite hydrogel scaffolds for bone tissue development (Figure 1). Methacrylated alginate (MeALG), with ~80% methacrylate (Me) functionalization, was synthesized and used as the photocurable hydrogel component of the composite bioink. Human allograft bone particles are mixed with MeALG and dissolved in PBS to form a composite ink formulation (Figure 1). Based on our previous studies [45] and initial screening tests, the composition of the MeALG is set to 3% (w/v). Considering the cell-laden nature of the bioinks, the bone particle concentration was limited to 1% (w/v). As-received human bone allograft tissue (in the form of large chips) is pulverized for up to 13 h to form uniform particles with an average particle size of ~16 µm. Figure 2 shows an SEM image of the particles (Figure 2A) and the evolution of the particle size distribution with grinding time (Figure 2B). After 1 h of grinding, the average bone particle size is 189 µm, with a broad particle size distribution composed of a main peak (50-580 µm range) with a wide tail towards lower particle sizes (0.5-50 µm). The main particle distribution peak gradually shifts to lower particle sizes with increasing grinding time, and a sharper peak emerges in the range of 0.4 to 46 µm (base width), corresponding to a mode of 16.4 µm after 13 h of grinding.
The change in ink viscosity with shear rate for the neat and composite inks is given in Figure 3A. The shear viscosity of the MeALG bioink increased significantly with the addition of bone particles, such that viscosity values at low shear rates (0.01 s⁻¹), approaching the zero-shear viscosity, increased from ~7 to ~30 Pa·s. Both ink formulations show shear-thinning behavior, indicated by the significant decrease in shear viscosity with increasing shear rate. The shear-thinning behavior of MeALG is known to be associated with chain entanglement [97], which resists ink flow at low shear rates. The degree of shear thinning is higher for the composite bioinks, as expected. This is due to particle-particle interactions, which enhance resistance to flow, indicated by high zero-shear viscosity values in filled polymer solutions [98]. Particle-particle interactions are destroyed at higher shear rates, leading to a significant decrease in ink viscosity [99]. Figure 3B illustrates the change in elastic modulus (G′) and viscous modulus (G″) with increasing shear strain (0.5-300% at 1 Hz). Both ink formulations behaved like a liquid, indicated by G″ > G′, yet the difference between G′ and G″ is significantly reduced for the composite inks, supporting the observed increase in viscosity in Figure 3A. Frequency sweep tests (0.1-100 Hz at 0.05% strain) shown in Figure 3C indicate increasing G′ and G″ with increasing frequency. The frequency dependence of G′ confirms the viscous behavior of the inks.

MeALG solution forms crosslinked hydrogels via radical polymerization when exposed to light in the presence of a photoinitiator. The crosslinking density, and hence the stiffness of the hydrogel, can be controlled by the % methacrylation, the initiator concentration, and the light exposure time [45,85]. To study the effect of particles on the photocuring kinetics, we monitored the change in G′, G″, and phase angle (δ) under light exposure (Figure 4). Figure 4A shows the results for the neat MeALG ink. Initially, the ink behaves like a liquid with G″ >> G′. When the ink is exposed to light, G′ increases significantly and becomes larger than G″ due to the start of the crosslinking reaction, with a gel point (at 126 s) defined at G″ = G′. A significant drop in δ is also an indication of gelation [100]. Both G″ and G′ reach an equilibrium, indicating the completion of the crosslinking reaction. The composite ink behaves similarly, and its gel point is similar (127 s). In summary, the presence of 1% bone particles does not affect the crosslinking behavior of the composite inks.
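Two of the rheological analyses above lend themselves to short scripts: a power-law (Ostwald-de Waele) fit quantifying shear thinning, and a G′/G″ crossover search for the gel point. This is a sketch under assumed data formats, not the authors' code:

```python
import numpy as np
from scipy.optimize import curve_fit

def fit_power_law(shear_rate, viscosity):
    """Fit the Ostwald-de Waele model eta = K * gamma_dot**(n - 1).
    n < 1 indicates shear-thinning behavior."""
    def model(gd, K, n):
        return K * gd ** (n - 1.0)
    (K, n), _ = curve_fit(model, shear_rate, viscosity, p0=(10.0, 0.5))
    return K, n

def gel_point(time_s, g_prime, g_double_prime):
    """Time of the G' = G'' crossover during photocuring (gel point).
    Assumes the ink starts liquid (G'' > G') and eventually gels."""
    diff = np.asarray(g_prime) - np.asarray(g_double_prime)
    idx = np.argmax(diff > 0)           # first sample where G' exceeds G''
    # Linear interpolation between the bracketing samples.
    t0, t1 = time_s[idx - 1], time_s[idx]
    d0, d1 = diff[idx - 1], diff[idx]
    return t0 + (t1 - t0) * (-d0) / (d1 - d0)
```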
Mechanical Properties

The mechanical behavior, i.e., the stiffness or Young's modulus (E), of the composite hydrogels was studied using compression tests (Figure 5). Although G′ indicates the elastic modulus of the hydrogels (Figure 4), the hydrogels should be equilibrated in PBS to determine the actual stiffness under in vitro culture conditions (Figure 5A). Our results show that the % swelling decreases from 84% to 72% for the composite hydrogels. In good agreement with the swelling results, E increases from 21 ± 6.2 kPa (neat gel) to 51 ± 7.1 kPa (composite gel). Note that covalently crosslinked MeALG is known to be stable under in vitro conditions; however, cell-mediated degradation can be achieved by using enzymatically degradable crosslinkers [85].
Printability

A standard line test study was performed to investigate the printability of the neat and composite MeALG. In general, the print strut size (width) increased with increasing print pressure at a constant print speed, whereas the strut size decreased with increasing print speed at a constant print pressure (Figure 6). We were able to print uniform struts as narrow as 600 µm (at 100 kPa and 20 mm/s) and 700 µm (at 100 kPa and 30 mm/s) for the composite and neat ink formulations, respectively. Grid patterns were also printed to confirm the printability of both inks. Here, the print speed needs to be adjusted slightly to create grid patterns with uniform struts and gaps (or pores) (Figure 7). For both ink formulations, we observe collapse of the pores at higher print pressures and low print speeds due to deposition of excess ink. At lower pressures and higher speeds, the struts are more pronounced, with clear definition of the pores, such that circular pores become squares (Figure 7C,D). Pluronic was also used as a control, as it is known to be easily printed into self-supporting structures [48]. The diagonal line of the square-shaped gap was measured to determine print quality [101]. Our results show that the length of the diagonal line decreases with increasing print pressure and the square shape converges to a rounded shape (Figure 8). We note that the square shape is preserved at higher speeds, yet the printed struts become thinner, making them dry out faster before completion of the print job, potentially leading to cell death, as discussed below. Therefore, it is necessary to compromise on perfect square pores in favor of rounded pores to achieve reproducibly bioprinted scaffolds with controlled shape and high cell viability.
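A minimal sketch of the diagonal-based pore-quality check described above (our formulation; the exact metric in Ref. [101] may be normalized differently):

```python
import math

def pore_quality(measured_diagonal_mm, pore_size_mm=1.0):
    """Ratio of the measured pore diagonal to the designed diagonal of a
    square pore (1 mm x 1 mm grid => designed diagonal = sqrt(2) mm).
    Values near 1 indicate well-defined square pores; values << 1 indicate
    pore collapse or rounding at high pressure / low speed."""
    designed = pore_size_mm * math.sqrt(2.0)
    return measured_diagonal_mm / designed

print(pore_quality(1.2))   # e.g., a partially rounded pore -> ~0.85
```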
In Vitro Studies

In this section, hMSC viability (Figure 9) and differentiation (Figure 10) results are presented for culture times up to 28 days. It is well known that MeALG is bioinert, as it does not contain inherent bioactivity toward cells and needs to be functionalized with bioactive cues to promote specific biological responses. MeALG is usually modified with integrin-binding arginine-glycine-aspartic acid (RGD) peptides to promote cell survival for matrix-tethering (anchorage-dependent) cells, such as the adult stem cells used in this study [85,94,96]. Methacrylate (Me) pendant groups allow chemical tethering of bioactive molecules containing cysteine groups, mainly through a Michael-type addition reaction. Here, we functionalized MeALG with RGD peptide following the protocol developed previously to enhance stem cell-matrix adhesion [85,94,96].

Our results show that the % cell viability is ≥89% at all time points, except for neat scaffolds at day 1 (85 ± 2%), yet this is not significantly different from the composite scaffolds at day 1 (89 ± 0.1%) (Figure 9B). We observed a slight increase at day 21 (97 ± 0.1% (N) and 96 ± 0.1% (C)) and day 28 (91 ± 0.1% (N) and 93 ± 0.1% (C)), which could be due to delayed proliferation within the scaffolds. Bioprinted scaffolds show a slight decrease in thickness with culture time, which is attributed to detachment of the scaffolds from the glass slide, as observed visually during culture. We believe that this led to the collapse of some layers or breakage of the sample at prolonged culture times.

Osteogenic differentiation studies were performed up to 28 days of culture using neat and composite scaffolds in osteogenic media, with the growth media condition used as a control. Our results show that ALP activity (normalized with respect to the activity recorded at culture day 1) increases significantly with culture time for both neat and composite scaffolds (Figure 10A). For each culture day, ALP activity is significantly higher for the composite scaffolds, indicating significantly enhanced osteogenic differentiation in the presence of bone particles. Note that hMSCs express ALP without differentiation, and it should not be used as the sole indicator of osteogenic differentiation. Normalized ALP activity is very low in growth media and does not change with culture time for neat scaffolds, whereas the activity increased with culture time for composite scaffolds, such that a 3.5× increase is recorded at day 28 (as compared to day 1). This is much smaller than the activity observed at day 28 (48×), and even at day 7 (9×), in differentiation media. The majority of the hMSCs (~95%) stained positive for osteocalcin (OC) within composite scaffolds, as compared to neat scaffolds (~70%) (Figure 10B). No staining was observed for samples cultured in growth media (results not shown). Alizarin red (AR) staining was used to evaluate calcium deposition, which is an indicator of osteogenic differentiation of hMSCs. AR staining of composite scaffolds is significantly darker as compared to neat scaffolds (Figure 10C,D). Some (significantly dimmer) AR staining is observed for composite scaffolds cultured in growth media, indicating some calcium deposition. In addition, the AR assay was performed to quantitatively measure AR activity (Figure 10E). Confirming our qualitative assessments, AR expression is significantly higher for composite scaffolds when compared with neat scaffolds in both growth and osteogenic media, with significantly higher AR expression in differentiation media. These results are in good agreement with the ALP activity and OC staining results. Overall, our results clearly indicate that the presence of bone particles significantly enhanced osteogenic differentiation of hMSCs within bioprinted composite scaffolds.

Decellularized human bone particles sustain the bioactivity of the native human bone tissue, including biominerals, as compared to commonly used digested bone tissue, which requires decellularization and demineralization [58]. Thus, combining human bone particles with a photocurable hydrogel is a novel approach to formulate bone-mimetic bioinks for extrusion-based bioprinting of scaffolds for bone tissue engineering. In this study, we report a visible-light-curable composite bioink formulation composed of 3% (w/v) MeALG and 1% (w/v) bone particles. Although bioceramics are commonly used to fabricate hydrogel-based composite bone inks, these only target the biomineral component and lack the ECM of the bone tissue. Composite bone inks based on human bone allograft particles provide a more complete bone-mimetic microenvironment, yet they are rarely reported in the literature. Compared to a similar approach reported previously, which utilized significantly higher concentrations of MeALG (5-15% w/v) and bone particles (10-75% w/v) [69], we are able to formulate bone-mimetic inks with significantly lower amounts of ingredients, providing a much more feasible path toward clinical applications, considering the cost and availability of the ingredients. Our current study clearly shows that bone particles can be used as a reinforcement to adjust the rheology of the bioink, and hence its printability, as well as to enhance the mechanical properties of the bioprinted hydrogels. Note that covalently crosslinked MeALG hydrogels are stable within the time frame of the in vitro experiments and should not degrade in the presence of cells, yet degradability can be introduced by using an enzymatically degradable peptide crosslinker [85]. Our results show that the bone particles do not interfere with the photocrosslinking step, making the formulation easy to bioprint and potentially usable as an injectable formulation. Most importantly, the presence of bone particles significantly enhances hMSC differentiation toward an osteogenic phenotype, such that hMSCs within composite hydrogels show significantly higher ALP activity, AR staining and activity, and osteocalcin staining as compared to cells within neat hydrogels.
Conclusions
In this study, we report a novel photocurable composite bioink composed of methacrylated alginate with human bone allograft particles for 3D bioprinting of bone scaffolds. Composite inks show higher low-shear viscosity and enhanced shear thinning due to the presence of bone particles. Incorporation of bone particles leads to a decrease in swelling and an increase in stiffness of the composite hydrogels as compared to neat hydrogels. Standard line tests and grid patterns were used to optimize the print pressure and speed to fabricate uniform scaffolds. In vitro culture studies up to 28 days reveal high cell viability (~90%) for hMSCs bioprinted within neat or composite bioinks. Differentiation studies confirm significantly higher alkaline phosphatase activity, calcium deposition, and osteocalcin expression for hMSCs within bioprinted composite hydrogels as compared to neat hydrogels. Overall, our results confirm that our composite bioinks have significant potential for creating scaffolds for bone tissue regeneration.

Informed Consent Statement: Not applicable.
Data Availability Statement:
The data presented in this study are available on request from the corresponding author. | 8,832.6 | 2022-09-01T00:00:00.000 | [
"Biology",
"Materials Science",
"Engineering"
] |
Carbon defects as sources of the green and yellow luminescence bands in undoped GaN
In high-purity GaN grown by hydride vapor phase epitaxy, the commonly observed yellow luminescence (YL) band gives way to a green luminescence (GL) band at high excitation intensity. We propose that the GL band, with a maximum at 2.4 eV, is caused by transitions of electrons from the conduction band to the 0/+ level of the isolated C_N defect. The YL band related to transitions via the −/0 level of the same defect has a maximum at 2.1 eV and can be observed only in some high-purity samples. However, in less pure GaN samples, where no GL band is observed, another YL band with a maximum at 2.2 eV dominates the photoluminescence spectrum. The latter is attributed to the C_N O_N complex.
I. INTRODUCTION
Gallium nitride (GaN) is a promising material for high-power/high-frequency electronics [1][2][3][4]. In particular, thick GaN layers on sapphire substrates and freestanding GaN, grown by hydride vapor phase epitaxy (HVPE), are expected to have very high breakdown voltage due to a low density of defects in this material. In photoluminescence (PL) studies of high-quality freestanding GaN grown by HVPE, the yellow luminescence (YL) band with a maximum at about 2.2 eV and the green luminescence (GL) band with a maximum at about 2.4 eV are the dominant defect-related PL bands [5]. Previously, the YL and GL bands were attributed to transitions of electrons from the conduction band to the 2−/− and −/0 transition levels, respectively, of the gallium vacancy-oxygen (V_Ga O_N) complex [6]. However, according to recent calculations using hybrid functionals, the PL band caused by transitions of electrons from the conduction band to the 2−/− level of V_Ga O_N is expected to have a maximum at 1.4 eV, i.e., in the infrared region [7]. Moreover, the exponential decay of the GL band at low temperatures was explained with the assumption that the GL band is caused by transitions of electrons from an excited state, located very close to the conduction band minimum (CBM), to the −/0 level of the V_Ga O_N acceptor [5]. However, such an assumption is not well justified. Indeed, an excited state close to the conduction band is possible for a positively charged deep donor, whereas the V_Ga O_N acceptor does not have donorlike excited states. Thus, a revision of the attribution for the GL band in GaN is needed.
Regarding the YL band, two assignments have been suggested recently based on modern first-principles calculations. Lyons et al. [8] attributed the YL band to the C_N defect, whereas Demchenko et al. [7] proposed that the YL band is caused by the C_N O_N complex. The C_N O_N complex is a deep donor with the 0/+ level at 0.75 eV above the valence band maximum (VBM). The C_N defect in GaN is a deep acceptor with the −/0 level at 0.9-1.1 eV above the VBM [7,8]. The schematic band diagram, including these thermodynamic transition levels, is shown in Fig. 1.
In addition to the acceptor −/0 level of C_N, calculations have predicted the existence of the 0/+ level for this defect at 0.43 eV above the VBM [7]. Since the −/0 level of C_N and the 0/+ level of C_N O_N have similar energies, it is possible that both the C_N and C_N O_N defects produce YL bands with similar shapes and positions (Fig. 1). However, due to the difference in their electronic structure, these defects can be distinguished through the study of the effect of excitation intensity on the PL spectrum. Indeed, it is expected that the C_N acceptors in n-type GaN can be saturated with holes (causing saturation of the YL band intensity), and at higher excitation intensities the defects will begin to capture an additional hole. Subsequently, transitions of electrons from the conduction band to the 0/+ level of C_N will cause a "secondary" PL band, which peaks at higher photon energies (Fig. 1). Lyons et al. [9] calculated that optical transitions of electrons from the conduction band to this level would cause a blue band with a maximum at about 2.7 eV. These authors noticed that in C-doped GaN, a blue band is often observed when the carbon concentration is high [10,11], or at high excitation intensity [12,13], which, in their opinion, supported the existence of the 0/+ level of the C_N defect. In contrast, there is only one optically active transition level for the C_N O_N complex in the band gap of GaN. A second level, +/2+, is predicted to be very close to the valence band (Fig. 1) and will act as a repulsive center for holes. Therefore, the saturation of the C_N O_N-related YL band will not be followed by the emergence of another PL band at higher photon energies. This important distinction between the two defects should allow reliable attribution of the YL band to either C_N or C_N O_N, depending on the existence of the secondary band.

FIG. 1. Calculations predict that the C_N O_N complex forms the 0/+ transition level in the band gap, while C_N forms two transition levels: −/0 and 0/+. The C_N O_N complex is expected to generate only the YL band; the C_N defect can generate the YL band and an additional, higher-energy band after the YL is saturated.

In this paper, we investigated the PL behavior for a number of GaN samples. We arrived at the conclusion that the GL band with a maximum at 2.4 eV, and not the blue band, is caused by electron transitions via the 0/+ level of the C_N defect. The PL bands associated with the C_N and C_N O_N defects can both be called YL bands, since they have only slightly different positions of their maxima. It appears that in a majority of GaN samples, the C_N O_N complex is responsible for the YL band. The C_N defect can be revealed through the observation of the secondary (GL) band only in high-quality GaN grown by the HVPE technique. In this paper, time-resolved PL experiments have been employed to identify the GL band due to its short lifetime and exponential decay even at low temperatures (30-100 K).
A. Experimental details
We observed the GL band (at least in time-resolved PL measurements) in more than 20 samples, which were 10- to 30-µm-thick unintentionally doped GaN layers grown by HVPE on c-plane, 2-in. sapphire substrates. The growth was performed at temperatures of 850-1000 °C, at atmospheric pressure and in an argon ambient. Ammonia and HCl were used as the precursors. The growth rate (0.2-1 µm/min) was controlled by the HCl gas flow rate through the Ga source. For a more detailed study, two representative samples were selected: 1007 and RS280. The concentration of free electrons, n, given in Table I for these two samples, is calculated from the PL lifetime of the YL band at room temperature according to a model presented in Ref. [14], while direct Hall effect measurements showed an apparent concentration about three times higher due to the existence of a highly conductive layer near the GaN/sapphire interface [15].
An additional HVPE-grown sample in this paper was undoped freestanding GaN produced at the Samsung Advanced Institute of Technology (sample B73).The HVPE samples were compared to GaN samples grown by metal-organic chemical vapor deposition (MOCVD) on sapphire substrates, in which no traces of the GL band could be found (samples EM1256, EM6881, EM7169, and EM7049) [16,17].
The concentrations of C and O impurities ([C] and [O], respectively) were estimated from secondary ion mass spectrometry (SIMS) measurements and are given in Table I. The carbon concentration in the MOCVD-grown samples was varied in the range [C] = 4 × 10¹⁶ − 2 × 10¹⁷ cm⁻³ by changing the growth conditions. Samples EM6881, EM7169, and EM7049 are semi-insulating, and only a lower bound for their resistivity can be estimated.
Steady-state PL was excited with an unfocused He-Cd laser (50 mW, 325 nm), dispersed by a 1200 grooves/mm grating in a 0.3 m monochromator, and detected by a cooled photomultiplier tube. Calibrated neutral-density filters were used to attenuate the excitation power density (P_exc) over the range 10⁻⁵-0.2 W/cm², while a focused beam with a diameter of 0.2 mm was used to obtain P_exc up to 100 W/cm². Time-resolved PL was excited with a pulsed nitrogen laser (1 ns pulses, 6 Hz repetition rate, 337 nm wavelength). The excitation light intensity at the sample surface, P_0, was varied between 10¹⁸ and 10²⁴ cm⁻² s⁻¹ using neutral-density filters. A closed-cycle optical cryostat was used for temperatures between 15 and 320 K. The absolute internal quantum efficiency of PL, η, is defined as η = I_PL/G, where I_PL is the integrated PL intensity of a particular PL band and G is the concentration of electron-hole pairs created by the laser per second in the same volume. To find η for a particular PL band, we compared its integrated intensity with the PL intensity obtained from a calibrated GaN sample [18,19]. All samples were studied under identical conditions.
B. Theoretical details
To find the transition levels of the C_N and C_N O_N defects, we performed Heyd-Scuseria-Ernzerhof (HSE) hybrid functional [20] calculations. As in a previous paper [7], we adjusted the fraction of exact exchange to 0.312 and the screening parameter to 0.2 Å⁻¹. The computed band gap of 3.50 eV agrees with the low-temperature experimental value of 3.50 eV, and the computed relaxed lattice parameters for wurtzite GaN (a = 3.210 Å, c = 5.198 Å, and u = 0.377) also agree with the experimental values (a = 3.189 Å and c = 5.185 Å) [21]. Supercells of 128 atoms were used, with all internal degrees of freedom relaxed using HSE hybrid functional calculations until residual forces were 0.05 eV/Å or less. Plane-wave basis sets with a 400 eV cutoff at the Γ point were used in all calculations. Spin-polarized calculations were performed in all cases. The details of the calculation methods and corrections to defect energies can be found in Ref. [7].
It is necessary to mention a practical issue regarding the potential alignment correction ΔV, which originates from dropping the diverging G = 0 term in the Fourier energy expansion in a charged supercell [22]. This correction is relatively small (0.05 to 0.1 eV) and is proportional to the defect charge. In this paper, we compared the results of calculations with experimental measurements and analyzed the effect this correction has on the computed properties of different types of defects (e.g., isolated defects vs complexes). Although there are no significant changes to the final results, it appears that the potential alignment corrections slightly worsen the results for the isolated C_N defect but slightly improve the results for the C_N O_N complex. This may indicate that isolated defects and defect complexes have different electric dipole properties. Therefore, different corrections accounting for the artificial dipole interactions in periodic supercells may have to be applied to the two types of defects. In this study, ΔV was applied only to the C_N O_N complex, which improved the results by up to 0.1 eV. Further study is needed to clarify this behavior.
A. Yellow and blue luminescence bands in GaN grown by MOCVD
The YL band in GaN grown by MOCVD has a maximum at 2.2 eV (Figs. 2 and 3). In conductive n-type GaN (sample EM1256), the YL band intensity begins to saturate at excitation intensities P_exc > 10⁻⁵ W/cm² (Fig. 2). The shape of the YL band, I_PL(ħω), and the position of its maximum, ħω_max, remained unchanged for excitation intensities up to 0.2 W/cm². The shape of the YL band at low temperature can be modeled with the following formula, derived from a one-dimensional configuration coordinate model [23]:

I_PL(ħω) ∝ exp{−2S_e [√((E_0* − ħω)/(E_0* − ħω_max)) − 1]²},  (1)

with E_0* = E_0 + ħΩ_e/2, where S_e and ħΩ_e are the Huang-Rhys factor and the dominant phonon energy for the excited state, ħω is the photon energy, and E_0 is the zero-phonon line (ZPL) energy. The fitted values of ħΩ_e (52 meV), E_0, and ħω_max are typical for the YL band in GaN [5]. Note that no other luminescence bands can be found in the range from 2.6 to 3.2 eV for the sample with a low concentration of carbon (Fig. 2).
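For concreteness, the band shape of Eq. (1), as reconstructed above, can be evaluated numerically; the GL parameters below are those quoted later in the text, and the normalization is arbitrary:

```python
import numpy as np

def cc_band_shape(hw, S_e, E0, hw_max, hOmega_e):
    """One-dimensional configuration-coordinate band shape (Eq. (1) as
    reconstructed above). hw: photon energy array (eV); E0: ZPL energy;
    hw_max: band maximum; hOmega_e: dominant phonon energy (eV)."""
    E0_star = E0 + 0.5 * hOmega_e
    arg = (E0_star - hw) / (E0_star - hw_max)
    shape = np.zeros_like(hw)
    ok = arg > 0                      # band exists below the effective ZPL
    shape[ok] = np.exp(-2.0 * S_e * (np.sqrt(arg[ok]) - 1.0) ** 2)
    return shape

# GL band parameters quoted in the text: S_e = 8.5, E0 = 2.9 eV,
# hw_max = 2.4 eV, hOmega_e = 41 meV.
hw = np.linspace(1.6, 3.0, 300)
gl = cc_band_shape(hw, S_e=8.5, E0=2.9, hw_max=2.4, hOmega_e=0.041)
print(f"Band maximum near {hw[np.argmax(gl)]:.2f} eV")  # ~2.40 eV
```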
For semi-insulating GaN, a broad band can be found in the blue spectral region only in some samples (Fig. 3) [16]. The band has a maximum at 3.03 eV and is identified as the BL2 band, sometimes observed in high-resistivity GaN grown by MOCVD [24,25]. It has a characteristic fine structure, with the ZPL at 3.333 eV (inset in Fig. 3).

FIG. 3. (Color online) Low-temperature PL spectra at P_exc = 10⁻⁴ W/cm² from semi-insulating GaN samples grown by MOCVD. The PL intensity is divided by the excitation intensity. The concentration of carbon [C] is given in units of 10¹⁷ cm⁻³. The inset shows a high-resolution view of the region near the ZPL of the BL2 band.

In the past, the BL2 band has been attributed to transitions of electrons from the conduction band (or from a state very close to the conduction band) to a defect level located at 0.15 eV above the VBM [24]. An important feature of the BL2 band is that it bleaches during continuous ultraviolet (UV) illumination, indicating unstable behavior [25]. It was suggested previously that the BL2 band is associated with some defect complex containing hydrogen, and that the bleaching is caused by dissociation of this complex under UV exposure [5,25]. It is unclear why the BL2 band is present only in some semi-insulating GaN samples and cannot be found in others (Fig. 3), and why the YL intensity varies considerably between samples. Further studies are needed to clarify this issue.
Another blue band (labeled BL) peaking at 2.9 eV (not shown here), which is related to the Zn_Ga acceptor, can be observed in semi-insulating GaN when the BL2 band is quenched and disappears at temperatures above 150 K [25]. Both the BL and BL2 bands apparently have no relation to the isolated C_N defect. On the other hand, the GL band with a maximum at 2.4 eV is a good candidate for transitions via the 0/+ level of the C_N defect, as will be discussed below.
Shape of the GL band
The GL band with a maximum at about 2.4 eV appears under high excitation intensity in the steady-state PL spectrum of high-quality GaN samples grown by the HVPE method. The nearly quadratic dependence of the GL intensity on P exc in n-type GaN is a strong indication that the defect responsible for this band captures two holes before any radiative recombination takes place [6]. However, it is often difficult to resolve the GL band in steady-state PL spectra, because it usually overlaps with the YL band at room temperature and with the UV luminescence (UVL) band or BL band at low temperatures. In this paper, we extracted the shape of the GL band from time-resolved PL measurements, because in these experiments the GL band can be isolated due to its PL lifetime being significantly shorter than that of other defect-related PL bands [26]. Figure 4 shows the PL spectrum taken 1 μs after pulsed excitation at various temperatures. The GL band has a maximum at 2.40 eV and a full width at half-maximum (FWHM) of 0.43 eV at low temperatures. For a very wide range of PL intensities (almost three orders of magnitude), the shape of the GL band can be fit with Eq. (1), as shown in Fig. 4. The PL band shape remains unchanged for different excitation intensities and for different time delays.
At temperatures below 100 K, the shape of the GL band is asymmetric, as is typically observed for defects with moderate electron-phonon coupling. By employing a one-dimensional configuration-coordinate model [5,27], we have estimated the energy of the dominant phonon mode in the excited state as ℏΩ e = 41 ± 5 meV from the analysis of the temperature dependence of the GL band FWHM as the temperature was increased from 30 to 300 K. Other parameters in Eq. (1) for the GL band were estimated as S e = 8.5, E 0 = 2.9 eV, and ω max = 2.4 eV from the best fit with the experimental data at T = 30 and 100 K (Fig. 4).

FIG. 4. (Color online) Normalized PL spectra at 10 −6 s after a laser pulse with P 0 = 5 × 10 23 cm −2 s −1 from freestanding GaN (sample B73). The symbols are experimental points, and the solid curve is calculated using Eq. (1) with the following parameters: S e = 8.5; E 0 = 2.9 eV; ω max = 2.4 eV; and ℏΩ e = 41 meV.

The above-determined parameters, which describe the shapes and positions of the YL and GL bands, will be used in the following sections to simulate the
shapes of the YL and GL bands and to resolve them in those PL spectra where they overlap with each other and with other PL bands.
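The FWHM analysis mentioned above is not reproduced in this text. In the one-dimensional configuration coordinate model it conventionally relies on the broadening law (a standard textbook form, assumed here rather than quoted from the paper)

$$
W(T) \approx W(0)\,\sqrt{\coth\!\left(\frac{\hbar\Omega_e}{2k_BT}\right)},
$$

so that fitting the measured FWHM between 30 and 300 K yields the effective phonon energy ℏΩ e ≈ 41 meV quoted above.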
Effect of temperature
While the PL from freestanding GaN containing the GL band has been previously studied at temperatures below 300 K [5], we are not aware of any reports on its behavior at higher temperatures. Figure 5 shows the PL spectra from sample B73 for select temperatures. At 300 K, a broad band with a maximum at 2.37 eV consists of two overlapping YL and GL bands. The comparison of the steady-state PL spectrum with the time-resolved PL spectrum for the GL band (shown with triangles in Fig. 5) indicates that the contribution from the YL band in the steady-state PL spectrum is small at 300 K (the YL band is not present in the time-resolved PL spectrum due to its long lifetime). The deconvolution of the broad band into the YL and GL bands with known shapes (not shown in Fig. 5) indicates that, as the temperature is increased from 300 to 380 K, the YL band intensity is nearly constant, whereas the intensity of the GL band decreases, and it finally disappears at T ≈ 400 K. At temperatures between 380 and 560 K, the defect-related PL band maximum gradually shifts from 2.10 to 1.93 eV. It is not clear whether the broad, red-yellow band consists of two bands associated with two different defects, or if the band is related to a single defect and redshifts due to the decreasing band gap with increasing temperature. The near-band-edge (NBE) peak shifts from 3.37 to 3.27 eV as the temperature is increased from 380 to 560 K.

[Fragment of the FIG. 6 caption; the remainder was lost: calculated using Eq. (2) for PL intensity and Eq. (3) for PL lifetime with the following parameters: ...]
The temperature dependences of the GL band intensity and the GL lifetime are shown in Fig. 6. The dependences are very similar to each other and can be fit with the expressions of Ref. [14] (Eqs. (2) and (3); see the sketch after this paragraph), where I 0 , η 0 , and τ 0 are the PL intensity, quantum efficiency, and PL lifetime, respectively, for the GL band before PL quenching begins (at about 290 K), C p is the hole-capture coefficient for the defect state responsible for the GL band, N v is the effective density of states in the valence band, E A is the energy distance between the VBM and the defect state responsible for the GL band, and g is its degeneracy. From the fit, shown with a solid line in Fig. 6, we have estimated that E A ≈ 0.54 eV and C p ≈ 10 −6 cm 3 /s.
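Eqs. (2) and (3) are missing from the extracted text. A hedged sketch, assuming the standard model of thermal quenching by hole emission to the valence band (the published expressions may differ in detail):

$$
\frac{I^{\mathrm{PL}}(T)}{I_0} \approx \frac{\tau(T)}{\tau_0} \approx \left[1 + (1-\eta_0)\,\tau_0\,C_p\,g^{-1}N_v\exp\!\left(-\frac{E_A}{kT}\right)\right]^{-1}.
$$

The Boltzmann factor describes thermal emission of holes from the defect level to the valence band; because the same factor controls both the intensity and the lifetime, the two dependences in Fig. 6 look nearly identical.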
C. Yellow and green luminescence bands in GaN layers grown by HVPE
Detailed studies of the effect of excitation intensity on PL were conducted at T = 100 K. This temperature was chosen because at lower temperatures the exciton emission is very strong and may contribute as a parasitic signal at photon energies where the defect-related bands were observed. On the other hand, the temperature-induced broadening of the GL band can be ignored at 100 K, as can be seen in Fig. 4. The contribution of the GL band to the PL spectrum and its PL lifetime varied from sample to sample, apparently due to different concentrations of point defects and free electrons in the set of 20 undoped GaN samples (layers on sapphire) grown by HVPE.
Below, the results are presented for two representative samples: a high-purity sample with the strongest GL band (sample RS280) and a less pure sample in which the YL band was stronger and the GL band could be observed only in time-resolved PL measurements (sample 1007). To evaluate the presence of carbon, oxygen, and a few other impurities, dedicated SIMS measurements for samples RS280 and 1007 were carried out by the Evans Analytical Group. In this analysis, a very low detection limit for carbon was achieved in vacuum by removing carbon adsorbed at the surface. The reduction in surface carbon resulted in less interference (and thus a lower background/detection limit) during the SIMS measurement of the underlying GaN region. The concentrations of selected impurity species in the two samples are given in Table II. The data for depths between 2 and 4 μm represent the bulk part of the GaN layers. To compare with PL results, we limit the analysis to the depth range of 100 to 400 nm, because the concentration of photogenerated charge carriers in our PL experiments is negligible beyond 400 nm, while the SIMS data in the first 100 nm can be affected by impurities adsorbed at the surface and can be unrealistic.
High-purity GaN (strong GL and weak YL)
The steady-state PL spectra for selected excitation intensities at T = 100 K for sample RS280 are shown in Fig. 7. In the PL spectra, the NBE emission has a main peak at 3.485 eV and is attributed to the annihilation of free excitons. At low excitation intensities, three defect-related PL bands can be resolved: the UVL band with a main peak at 3.30 eV, the Zn-related BL band with a maximum at 2.94 eV, and the red luminescence (RL) band with a maximum at 1.82 eV. The RL band in undoped GaN grown by HVPE is preliminarily attributed to a deep acceptor with an energy level 1.13 eV above the valence band [28]. With increasing excitation intensity, the relative contribution of the UVL, BL, and RL bands decreases, and the GL band with a maximum at 2.40 eV emerges (Fig. 7). By subtracting the shapes of the RL and BL bands (obtained from the PL spectrum at the lowest excitation intensity) from the PL spectrum measured at P exc = 0.2 W/cm 2 , we obtained the shape of a PL band (the dotted curve in Fig. 7) almost identical to the shape of the GL band found from the time-resolved PL in freestanding GaN (Fig. 4). By repeating this procedure for different excitation intensities, we obtained the dependence of the GL intensity on the excitation intensity (Fig. 8). We can see that at P exc < 10 −3 W/cm 2 the GL intensity increases approximately as the square of P exc . This behavior is expected for optical transitions via the lower transition level of C N (the 0/+ level) at excitation intensities for which the higher transition level (the −/0 level) is not yet saturated with holes (the YL intensity increases linearly with P exc ) [6], because the probability of a defect capturing two holes before recombination occurs is proportional to the square of the excitation intensity. Between 10 −3 and 1 W/cm 2 , the GL band intensity increases linearly with P exc , because one hole is already bound to the defect and only one additional hole needs to be captured. The YL band is expected to be saturated under these conditions. Finally, at P exc > 1 W/cm 2 the GL band saturates. The saturation of the RL and BL bands begins at much lower excitation intensities (at P exc > 10 −3 W/cm 2 ).

FIG. 7. (Color online) Steady-state PL spectra for sample RS280 at 100 K and selected excitation intensities. The dotted line shows the PL band obtained as a difference between the PL spectrum at 0.13 mW/cm 2 (multiplied by 36) and the PL spectrum at 200 mW/cm 2 . Triangles show the shape of the GL band calculated using Eq. (1), with parameters given in Fig. 4.

The solid lines in Fig. 8 are calculated with an expression obtained from rate equations [29,30] (Eq. (4); a hedged reconstruction follows).
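The rate-equation expression itself is missing from the extracted text. A hedged sketch, an assumed interpolating form with the correct limits rather than the paper's exact Eq. (4): linear in P 0 with quantum efficiency η 0 at low excitation, and saturating at N/τ 0 when all N defects are occupied,

$$
I^{\mathrm{PL}}(P_0) = \frac{\eta_0\,\alpha P_0}{1 + \eta_0\,\tau_0\,\alpha P_0 / N}.
$$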
Here, P 0 is the excitation intensity (expressed as the number of photons passing through a unit area of the sample surface per unit time), N is the concentration of defects responsible for a particular PL band, η 0 is the quantum efficiency of that PL band in the limit of low excitation intensity, and α is the absorption coefficient (α ≈ 1.2 × 10 5 cm −1 for GaN at 3.81 eV [31]). The only fitting parameter in Eq. (4) is N. From the best fit, we have estimated that the concentrations of defects responsible for the GL, RL, and BL bands are 1.5 × 10 15 , 1.5 × 10 16 , and 2 × 10 15 cm −3 , respectively.
The YL band was not observed in the steady-state PL spectra of this sample, because it is obscured by the RL band.

FIG. 8. (Color online) Dependence of the PL intensity on excitation intensity for the major PL bands in steady-state PL measurements for sample RS280 at 100 K. The experimental points for the GL band are obtained from a PL spectra deconvolution similar to the example shown in Fig. 7, but using the PL spectrum at a lower excitation intensity of 10 −5 W/cm 2 as the "background" spectrum. The solid lines are calculated using Eq. (4), with the following parameters: η 0 = 0.0018; τ 0 = 4.5 μs; and N = 1.5 × 10 15 cm −3 for the GL band; η 0 = 0.025; τ 0 = 6 ms; and N = 1.5 × 10 16 cm −3 for the RL band; and η 0 = 0.035; τ 0 = 500 μs; and N = 2 × 10 15 cm −3 for the BL band. The dashed line indicates the quadratic dependence of PL intensity on excitation intensity. The dotted line is calculated using Eq. (4), with η 0 = 0.0018; τ 0 = 2 ms; and N = 1.5 × 10 15 cm −3 , which is the expected dependence for the YL band.

To estimate the contribution of the C N -related YL band to the PL spectra at different excitation intensities, we assumed that the hole-capture coefficients for the −/0 and 0/+ levels of the C N defect (for the YL and GL bands, respectively) are similar.
The expected intensity of the YL band is shown with a dotted curve in Fig. 8. Note that if the hole-capture coefficient for the −/0 level is higher, the intensity of the YL band will shift upward by the same factor. While the YL band with such intensity could not be observed in the steady-state PL spectrum at any excitation intensity, it can be revealed in time-resolved PL measurements.
Figure 9 shows the evolution of the PL spectrum for the same sample after pulsed excitation at T = 100 K. For short time delays (up to 10 −5 s), the GL band is the dominant band in the defect-related part of the spectrum. For longer time delays, the GL band disappears, and the YL band with a maximum at 2.1 eV emerges (Fig. 9). The shape of the YL band in Fig. 9 is fit using Eq. (1) with the same parameters ℏΩ e and S e as for the YL band in Fig. 2. However, ω max = 2.1 eV and E 0 = 2.57 eV had to be taken for the best fit in Fig. 9, instead of 2.2 and 2.64 eV, respectively, which were used in Fig. 2. An example with the latter parameters is shown in Fig. 9 as dashed curve 4. This indicates that these two YL bands originate from two different sources. At longer delay times (longer than 10 −3 s), the YL band gradually disappears as it is obscured by the RL band. The BL, GL, YL, and RL bands can be reliably observed in time-resolved PL spectra because they have different PL lifetimes at 100 K: 500 μs (BL band), 4.5 μs (GL band), 2 ms (YL band), and ∼6 ms (RL band).
FIG. 10. (Color online) The dependences of PL intensity after a laser pulse on excitation intensity for PL bands in sample RS280 at 100 K. The solid lines are calculated using Eq. (37) of Ref. [30], with the following parameters: η 0 = 0.0018; τ 0 = 4.5 μs; and N = 5 × 10 14 cm −3 for the GL band; η 0 = 0.0018; τ 0 = 2 ms; and N = 5 × 10 14 cm −3 for the YL band; and η 0 = 0.004; τ 0 = 100 μs; and N = 5 × 10 14 cm −3 for the UVL band. The dashed line indicates the quadratic dependence of PL intensity on the excitation intensity.

The dependences of the peak intensity on the excitation intensity for the GL, YL, and UVL bands at 100 K are
shown in Fig. 10. The intensities of the UVL and YL bands increase linearly with increasing excitation intensity up to ∼10 22 cm −2 s −1 and saturate at higher excitation intensities. However, the intensity of the GL band increases as the square of the excitation intensity for P 0 < 10 21 cm −2 s −1 (dashed line in Fig. 10). These results are consistent with those obtained from steady-state PL, suggesting that two holes are captured by the defect when producing the GL band, while only one hole is captured by the same defect when producing the YL band.
Figures 11 and 12 show the evolution of the PL spectra with excitation intensity (for steady-state PL) and with time (after a laser pulse) for sample 1007. In the steady-state PL spectra, the GL band appears to contribute only at high excitation intensity, as a weak shoulder on the stronger YL band. The RL, YL, and GL bands significantly overlap and are difficult to resolve. To deconvolute the bands, we simulated the shapes of the YL and GL bands by using Eq. (1), with the parameters given in the captions of Figs. 2 and 4, respectively, while the shape of the RL band was taken from other samples in which the RL band was the dominant defect-related PL band [28]. The intensities of the RL band in samples RS280 and 1007 were almost the same, whereas the YL band was at least an order of magnitude stronger in sample 1007 than in sample RS280 (where it could not be seen in the steady-state PL spectra).
In the PL spectrum obtained after pulsed excitation, the GL band and the YL band can be clearly seen at short and long time delays, respectively (Fig. 12). In contrast to sample RS280 (with ω max = 2.1 eV for the YL band), the YL band has a maximum at 2.2 eV and the same shape as for GaN samples grown by MOCVD. An example with the parameters of the YL band with ω max = 2.1 eV is shown in Fig. 12 as dashed curve 2. The different positions of the YL band maxima in time-resolved PL measurements for samples RS280 and 1007 are the first indication that the YL bands in these two samples are caused by two different defects. An additional piece of evidence for this follows from the analysis of the decays of the YL and GL bands after a laser pulse, which is discussed below.
Decay of the GL and YL intensity after a laser pulse
Figure 13 shows the decay of PL intensity at 2.3 eV and 100 K after a laser pulse for three HVPE-grown samples. Since the PL intensity integrated over photon energies is nearly the same for the YL and GL bands when they are normalized at their maxima, and the PL intensities at 2.3 eV for the normalized bands are equal (∼85% of their peak intensities), the PL intensity at 2.3 eV is proportional to the number of photons emitted at any time via both the YL and GL mechanisms. Moreover, the relative number of photons emitted separately via the YL and GL mechanisms can be found by integrating this intensity over time in the range of short time delays (where the GL band dominates and the contribution of the YL band is negligible) and at longer time delays (where the GL band vanishes and only the YL band remains). The dependence of PL intensity on time can be fit for each band with the exponential decay I PL (t) = I PL (0) exp(−t/τ) (Eq. (5)). The exponential decay for times shorter than 10 −5 -10 −4 s corresponds to the decay of the GL band, with lifetime τ ranging from 1.8 to 4.5 μs for different samples. For longer time delays, the nearly exponential decay corresponds to the decay of the YL band, with τ ranging from 0.3 to 1.85 ms. The PL intensities were integrated over time, separately for the short-lived GL and for the long-lived YL (inset in Fig. 13). The ratio of the GL band intensity to the YL band intensity in these measurements is 2.4 for sample B73, 1.7 for sample RS280, and 0.18 for sample 1007. Note that these measurements were conducted at high excitation intensity, corresponding to the saturation regime (Fig. 10). If the GL and YL bands in these samples were related to the −/0 and 0/+ states of the same defect, and if after each pulse equilibrium in the dark were completely achieved, then the time-integrated intensities of the GL and YL bands in time-resolved measurements should be equal. Indeed, when all the C N defects are saturated with two holes, the same number of photons should be emitted from these defects when they lose the first hole in the process of fast electron-hole recombination (the GL band) and when they subsequently lose the second hole at longer time delay (the YL band). The small prevalence of the time-integrated GL intensity over the YL intensity in samples B73 and RS280 can be explained by incomplete restoration of the dark equilibrium. Such an assumption is supported by the observation of a slower-than-exponential tail of the YL band (Fig. 13). This may be due to small contributions from donor-acceptor pair (or shallow donor-deep donor pair) transitions at 100 K, which persist for a long time [32]. Since the restoration appears to be incomplete (the next pulse arrives before all the defects become completely filled with electrons), the time-integrated intensity of the YL band related to the −/0 level of C N is expected to be lower than that of the GL band. This agrees with the data for samples B73 and RS280 in the inset to Fig. 13. However, the prevalence of the YL intensity over the GL intensity by almost an order of magnitude in sample 1007 indicates that the stronger YL band in this sample is not related to the C N defect. We suggest that this stronger YL band is actually related to the C N O N complex.
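As an illustration of this analysis, here is a minimal Python sketch (not from the paper; it fits a sum of two decays of the form of Eq. (5) to synthetic data generated with the sample B73 parameters from the caption of Fig. 13, and the noise model is hypothetical):

```python
import numpy as np
from scipy.optimize import curve_fit

# Two-component decay: each band follows Eq. (5), I_PL(t) = I_PL(0) exp(-t/tau),
# and the measured signal at 2.3 eV is the sum of the GL and YL components.
def decay(t, i_gl, tau_gl, i_yl, tau_yl):
    return i_gl * np.exp(-t / tau_gl) + i_yl * np.exp(-t / tau_yl)

rng = np.random.default_rng(0)

# Synthetic data mimicking sample B73 (amplitudes and lifetimes taken from
# the caption of Fig. 13); multiplicative noise is an assumption.
t = np.logspace(-7, -2, 200)                      # time after the pulse, s
i_true = decay(t, 1.6, 3.6e-6, 1.95e-3, 1.3e-3)
i_meas = i_true * rng.lognormal(0.0, 0.05, t.size)

popt, _ = curve_fit(decay, t, i_meas, p0=[1.0, 1e-6, 1e-3, 1e-3])
i_gl, tau_gl, i_yl, tau_yl = popt

# The time-integrated intensity of an exponential component is I_PL(0) * tau;
# the GL/YL ratio tests whether both bands come from the same defect.
ratio = (i_gl * tau_gl) / (i_yl * tau_yl)
print(f"tau_GL = {tau_gl * 1e6:.2f} us, tau_YL = {tau_yl * 1e3:.2f} ms")
print(f"time-integrated GL/YL ratio = {ratio:.2f}")
```

With the caption values for sample B73, the time-integrated ratio (1.6 × 3.6 μs)/(1.95 × 10 −3 × 1.3 ms) comes out near 2.3, close to the value of 2.4 quoted above.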
Formation energies and the choice of chemical potentials
Hybrid functional calculations, which have become widespread in semiconductor defect physics, are capable of accurately reproducing some defect properties, such as optical and thermodynamic transition levels [33]. On the other hand, the computed formation energies are difficult to compare directly with experiment, in part because of the difficulties in determining the elemental chemical potentials. In previous studies of defects in semiconductors, chemical potentials were often extracted from bulk or molecular calculations of the most stable phases of the chemical elements. Computed total energies of diamond [9] or graphite [34] were used for the carbon chemical potential, and energies of oxygen and nitrogen in the O 2 and N 2 molecules for the O and N chemical potentials [7]. In principle, the chemical potentials of the elements involved in sample growth should be obtained from the formation enthalpies of the phases competing with the growth of GaN. Therefore, the chemical potential of oxygen obtained from Ga 2 O 3 , rather than from the O 2 molecule, is expected to better represent the formation energy of an oxygen defect in GaN (such as O N ). In addition, the competing phase needs to be identified for the chemical potential of nitrogen, since the formation of this defect (O N ) depends on the energy balance of exchanging oxygen and nitrogen in the GaN lattice. For nitrogen, the formation of ammonia might represent a phase competing with the formation of GaN if the growth involves significant amounts of hydrogen. To be consistent, one would need to apply the same procedure to all elemental chemical potentials, i.e., identify the competing phases for all elements involved in material formation and pick the lowest-energy phases as those limiting the sample growth. However, given the variety of growth regimes and methods, it is difficult to reliably predict all possible competing phases for all elements in a sample. In the above example of O N , a growth regime which does not involve hydrogen would result in a different growth-limiting phase determining the chemical potential of nitrogen. Coupled with the nonequilibrium nature of crystal growth, this results in a wide variety of defect properties in different samples. Thus, it is difficult to expect that a single value of the chemical potential can lead to computed defect formation energies that accurately represent actual defect concentrations in various samples. Only average, general trends in defect formation can be captured, provided that suitably averaged values of the chemical potentials are used.
Recently, Lany [35] suggested that an appropriate choice of chemical potentials (along with corrections to the host semiconductor band edges) results in accurate computed formation enthalpies of solid compounds [36,37]. These atomic chemical potentials, or fitted elemental-phase reference energies (FERE) [37], are obtained by fitting their values to a large set of measured formation enthalpies. As a result, FERE chemical potentials provide better error cancellation for predicting the formation enthalpies of binary, ternary, and quaternary compounds, where chemical bonding is formed between metals and nonmetals and the energy differences have to be computed between chemically different systems. Thus, compared to chemical potentials obtained from calculations of elemental bulk phases or molecules, the FERE energies can provide more reliable average formation energies of defects.
In this paper, for the chemical potentials of oxygen and nitrogen, we used FERE energies obtained from the HSE hybrid functional [36]. The resulting negative formation energy of O N (see below) is in accord with the significantly larger absolute value of the (negative) formation enthalpy of gallium oxide compared to that of gallium nitride. For comparison, we also calculated the formation energies of oxygen-related defects using the chemical potential of oxygen obtained from the formation enthalpy of Ga 2 O 3 . The chemical potentials of carbon, silicon, and gallium were obtained from HSE calculations of bulk diamond, silicon, and orthorhombic gallium metal. We need to stress that optical and thermodynamic transition levels are unaffected by the choice of chemical potentials, since they are formation energy differences. Therefore, unlike the absolute values of the formation energies, the computed transition levels can be directly compared with experiment.
Gallium vacancy complexes
First, we present the results of calculations for the V Ga -related complexes. For more than a decade, the V Ga or its complex with oxygen was considered, by both experimentalists and theorists, to be the major candidate for the defect causing the YL band [5]. Previous density functional theory (DFT)-based calculations predicted that the V Ga is a deep acceptor with multiple charge states, from neutral to 3− [38,39]. Therefore, a logical assumption was that in n-type GaN, positively charged shallow donors, such as silicon or carbon substituting for gallium (Si Ga , C Ga ) and oxygen substituting for nitrogen (O N ), could be bound to the 3− charged V Ga to form complexes. Based on PL measurements and extensive positron annihilation data, it was suggested that the V Ga O N complex is responsible for the YL and GL bands in n-type GaN [5,6]. Here, we examine the case of the vacancy-containing complexes in GaN by using hybrid functional calculations.
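For orientation, the quantities plotted in Figs. 14 and 15 follow the standard defect-thermodynamics formalism (given here in its generic textbook form; the paper's own conventions may differ in notation). The formation energy of defect X in charge state q is

$$
E^{f}[X^{q}] = E_{\mathrm{tot}}[X^{q}] - E_{\mathrm{tot}}[\mathrm{bulk}] - \sum_i n_i \mu_i + q\,(E_F + E_{\mathrm{VBM}}) + E_{\mathrm{corr}},
$$

where n i atoms of species i with chemical potential μ i are added (n i > 0) or removed (n i < 0), E F is the Fermi energy measured from the VBM, and E corr collects finite-size corrections such as the potential alignment ΔV discussed earlier. The thermodynamic transition level ε(q 1 /q 2 ) is the Fermi energy at which the formation energies of the two charge states are equal, i.e., the intersection point of the corresponding lines in Fig. 14.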
Figure 14 shows the formation energies as a function of the Fermi energy in the band gap for the isolated V Ga and three V Ga complexes, in both Ga-rich and Ga-poor growth conditions. The slopes of the formation energy lines represent the charge states of a defect, and the points of intersection indicate the thermodynamic transition levels. The V Ga C Ga complex exhibits high formation energies in both Ga-rich and Ga-poor environments and therefore is unlikely to form. Furthermore, the computed binding energy for this complex is negative, implying that the interaction between V Ga 3− and C Ga + is repulsive. Both the V Ga Si Ga and V Ga O N complexes are stable and have relatively low formation energies, especially in a Ga-poor growth environment. However, as shown in Fig. 14, the −/0 and 2−/− thermodynamic transition levels for these complexes are too high above the VBM to be responsible for either the YL band or the GL band. In particular, the V Ga O N complex forms a negative-U center, where the −/0 transition level is higher than the 2−/− level: 1.85 and 1.76 eV above the VBM, respectively. The related PL bands in n-type GaN are expected to have maxima at 1.53 eV (for transitions via the 2−/− level) and 1.24 eV (via the −/0 level). Thus, the V Ga O N complex would produce PL bands in the infrared region. Similar results are obtained for the V Ga Si Ga complex, where the −/0 and 2−/− thermodynamic transition levels are computed to be at 1.74 and 2.13 eV above the VBM (Fig. 14). This leads to optical transitions of 1.13 and 0.7 eV via the −/0 and 2−/− transition levels, respectively, which are also in the infrared region. Finally, as shown in Fig. 14, the isolated V Ga also has a deep 3−/2− transition level (2.06 eV above the VBM), and the PL band maximum (if the V Ga is present and behaves as a radiative defect) would also be in the infrared. A similar value for the 3−/2− transition level was recently obtained by Gillen and Robertson using the screened-exchange local density approximation (LDA) method [40]. Thus, neither the isolated V Ga nor the V Ga -containing complexes can account for the YL or GL bands in GaN.

Figure 15(a) shows the formation energies of carbon- and oxygen-related defects as a function of the Fermi energy in the band gap. Overall, in n-type GaN, the formation energies of oxygen donors are significantly lower (by ∼3.8 eV) than those of carbon acceptors, while the formation energy of the C N O N complex is ∼1.8 eV lower than that of C N . These results are obtained using the elemental chemical potentials described in Sec. III D 1. In Fig. 15, the chemical potential of nitrogen is not adjusted by the formation enthalpy of GaN, and the results represent nitrogen-rich conditions, as commonly referred to in the literature. While oxygen is the most abundant donor in GaN in most cases, the concentration of carbon can exceed that of oxygen in some MOCVD samples (Table I). However, even with the same growth method (MOCVD), both conductive samples (with more oxygen donors) and insulating samples (with more carbon) can be obtained. This suggests that the absolute values of formation energies computed with any theoretical method that assumes equilibrium conditions are at best rough guidelines for the expected relative concentrations of impurities. The SIMS measurements suggest that the nonequilibrium incorporation of carbon into GaN (in either the C N acceptor or C N O N configuration) could in some cases be similar to that of the oxygen donor O N . For comparison, dashed lines in Fig.
15(a) show the results for O N and C N O N computed using the oxygen chemical potential obtained from Ga 2 O 3 . In this case, the formation energies of C N and O N become comparable (which could be the case in some MOCVD samples). Thus, different competing phases during different growth regimes can create favorable conditions for the formation of either the isolated carbon acceptor C N , in some cases, or the C N O N complex, in other cases.
Formation of carbon- and oxygen-related defects
The thermodynamic transition levels of the C N acceptor, the C N O N complex, and the O N donor have been published elsewhere [7,9]. The C N acceptor is found to have two transition levels [Fig. 15(a)]: the 0/+ transition level at 0.48 eV above the VBM and the −/0 transition level at 1.04 eV above the VBM. Interestingly, a defect commonly thought of as an acceptor can also exhibit a donor-like + charge state. This is due to the several electronic states that the defect creates in the band gap. For example, both C N and C N O N create three electronic defect states in the band gap (shown in Ref. [7]). In the neutral state of C N , two of these defect states are occupied by electrons (all three states are occupied in neutral C N O N ). The addition of an electron leads to the −/0 transition level, and the removal of an electron leads to the 0/+ level in the band gap. Detailed calculations show that numerous defects have multiple charge states in wide-band-gap semiconductors, especially defect complexes, which often create multiple defect states in the band gap.
The 0/+ transition level of the O N donor is found to be 0.14 eV below the CBM. The transition levels for the C N O N complex are 0.14 eV above the VBM for the +/2+ level and 0.75 eV for the 0/+ level. The −/0 transition level of C N is deeper than the 0/+ level of the C N O N complex, suggesting that the YL band generated by C N should be shifted to energies lower than that from the C N O N complex.
Most importantly, the calculations presented in Fig. 15 confirm that the difference in the transition level structure for the two carbon defects (C N and C N O N ) makes it possible to distinguish between the different sources of the YL band. In samples where carbon is mostly bound into the C N O N complex, only a single YL band should be observed, since there is only one possible transition, via the 0/+ level. However, if carbon exists mostly in the form of isolated C N defects, two PL bands are possible. Along with the YL band caused by transitions via the −/0 level of C N , a secondary PL band can be activated by increasing the optical excitation intensity. This will happen when a second hole is captured by a neutral C N defect, with subsequent emission via the 0/+ level of the C N defect.
Optical transitions
Due to the existence of the three carbon-related transition levels (the 0/+ and −/0 levels for C N and the 0/+ level for C N O N ), three well-separated PL bands can be observed experimentally in the visible part of the spectrum. Figure 16 illustrates the optical transitions via the C N and C N O N defects.
While optical transitions via the isolated carbon acceptor C N have been suggested to produce the YL band [8], we find that transitions via this acceptor should produce a PL band with a maximum at 1.98 eV and a ZPL at 2.45 eV. In n-type GaN, this acceptor is negatively charged in the ground state (Fig. 15). Once an electron-hole pair is created by above-band-gap laser illumination, the hole is captured by the negatively charged acceptor in ∼10 −10 s, changing its charge state to neutral (if the defect were excited resonantly with below-band-gap light, the maximum of the characteristic excitation band would be at 2.81 eV, according to our calculations). Since the PL lifetime is relatively long, a second hole can be captured by the neutral C N defect, due to the existence of the 0/+ transition level of C N in the band gap. As a result, radiative recombination can occur, with a free electron recombining with a hole at C N + . This transition is shown with a downward arrow in Fig. 16(b). The energy of this transition is computed to be 2.59 eV, between green and blue in the visible range. The ZPL for this transition is computed to be 3.00 eV. The yellow (or orange) PL band with a maximum at 1.98 eV is generated by the C N defect and should have a lower PL band maximum than the PL band generated by the C N O N complex [Fig. 16(c)]. The C N O N complex is expected to produce a PL band with a maximum at 2.25 eV and a ZPL at 2.73 eV.
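A quick consistency check on these numbers (our arithmetic, not stated in the paper): the Franck-Condon shift, d FC = E ZPL − ℏω max , is 2.45 − 1.98 = 0.47 eV for the −/0 transition of C N , 3.00 − 2.59 = 0.41 eV for the 0/+ transition of C N , and 2.73 − 2.25 = 0.48 eV for the C N O N complex; that is, all three transitions involve comparable lattice relaxation.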
Overall, the calculated trends are such that in samples where carbon is bound into the C N O N complex, only one PL band (yellow) is possible, while in samples where the carbon is predominantly isolated as the C N acceptor, two PL bands are possible: yellow (or orange) and green (or green-blue). These trends are in quantitative agreement with the experimental data, as will be discussed in Sec. IV.
Finally, it may appear surprising that both donors and acceptors can cause the YL band in GaN. It is commonly thought that an acceptor is a much more natural candidate for this process, since a negatively charged acceptor should capture holes more efficiently. However, a neutral deep donor is also capable of capturing a hole, albeit with a capture cross section lower by about an order of magnitude [41]. In addition, some published results of electron paramagnetic resonance measurements indicate that the YL band is associated with a deep donor rather than an acceptor [42]. The g factors determined for the YL band with a maximum at 2.2 eV (g ∥ = 1.989 and g ⊥ = 1.992) are smaller than the free-electron value of g, which indicates that the related defect may indeed be a deep donor [42].
Stability of the C N O N complex
The binding energy B of the C N O N complex for a range of Fermi energies is shown in Fig. 15(b). While this complex is unstable in p-type GaN, the binding energy of this complex in n-type GaN is 0.46 eV. Since the complex is formed by next-nearest-neighbor atoms in the GaN lattice, this binding energy is relatively low. However, the binding energy provides limited information about the stability of the complex. In order to estimate the stability of the complex, we calculated the C N O N complex dissociation barriers using HSE and the generalized gradient approximation (GGA).
The C N O N complex can dissociate by the jump of either a C or O atom into an interstitial site, leaving behind a nitrogen vacancy (V N ). However, our HSE calculations for n-type GaN show that the formation energies of the stable interstitial O i and split-interstitial C i are high: 3 to 5.5 eV higher than those of the O N and C N defects, which agrees with previously published results [9,43,44]. Additionally, our calculations show that in the presence of the V N formed in place of the C N or O N defects, both the split-interstitial carbon and the interstitial oxygen are unstable. Therefore, dissociation of the C N O N complex via diffusion of either oxygen or carbon into the nearest interstitial sites is energetically unfavorable.
Even if a V N were already present as a nearest neighbor of the C N O N complex (which is unlikely, due to the negligible concentration of V N in n-type GaN and the absence of an attractive interaction between V N and C N O N ), migration of oxygen into the V N would not be favorable. Nudged elastic band calculations based on GGA reveal a diffusion barrier of 1.7 eV for this process. Assuming a typical phonon frequency of 10 13 s −1 and following Ref. [45], we estimate that in this case the complex remains stable at temperatures up to ∼660 K. Thus, when C N O N complexes are formed during growth, they are likely stable.
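As a rough check of this estimate (our arithmetic; the stability criterion of roughly one jump per second is our assumption, not necessarily that of Ref. [45]): the jump rate is R = ν exp(−E b /k B T) with ν = 10 13 s −1 and E b = 1.7 eV. At T = 660 K, k B T ≈ 0.057 eV, so R ≈ 10 13 × exp(−29.9) ≈ 1 s −1 ; at lower temperatures the complex dissociates at a negligible rate.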
A. Yellow and blue luminescence bands in undoped and C-doped GaN
In MOCVD-grown n-type GaN, only the YL band is observed in the defect-related part of the PL spectrum (Fig. 2). With increasing excitation intensity, the YL band intensity saturates, and no other bands appear at higher photon energies. If the YL band in this sample were caused by electron transitions via the −/0 state of the isolated C N defect, then a PL band related to transitions via the 0/+ level would be expected to emerge at higher photon energies, which is not observed in Fig. 2. Thus, the absence of any PL band in the range from 2.6 to 3 eV in n-type GaN (Fig. 2) is an indication that the YL band in the MOCVD-grown GaN (sample EM1256) is not related to the isolated C N defect. In contrast, the C N O N complex can explain the YL band in this sample, since it does not have the transition levels necessary to produce an additional PL band in this range.
In the literature, there is conflicting information regarding the blue band in GaN, which, according to Ref. [9], could be assigned to the 0/+ transition level of the C N acceptor. A careful analysis suggests that this is unlikely. Indeed, a blue band is often observed in C-doped GaN, along with the YL band [10-13,46]. However, closer inspection of the luminescence spectra in these papers allows us to conclude that at least two different defect-related blue bands were observed. A band with a maximum at 2.85-2.86 eV (the BL band) was observed in GaN grown by molecular beam epitaxy (MBE) [12,13], and a broader band with a maximum at 3.0 eV (the BL2 band) was observed in GaN grown by MOCVD [10,11]. In Ref. [12], where the excitation intensity was extremely high (400 kW/cm 2 ), the intensity of the YL band was about an order of magnitude higher than that of the BL band in all of the high-resistivity GaN samples doped with carbon. This is inconsistent with a model according to which the YL and BL bands are caused by two charge states of the same defect [9]. Indeed, the intensity of the YL band should be much lower than that of the BL band, since the defect is almost completely saturated with holes under such experimental conditions. At lower excitation intensities (20 W/cm 2 ), the intensity of the BL band (relative to the YL band) decreased with increasing concentration of C from 2 × 10 18 to 2 × 10 19 cm −3 [13]. We assume that the BL band observed in Refs. [12,13] is caused by the Zn Ga acceptor, since the BL band with a maximum at 2.9 eV is strong even at low levels of contamination with Zn (lower than 10 16 cm −3 ) [5]. It is also unlikely that the BL band in C-doped GaN is related to Mg, because the Mg-related BL band appears only in GaN heavily doped with Mg, and it shifts greatly with increasing excitation intensity [47]. No shifts were observed for the BL band in the MBE-grown GaN [13].
In MOCVD-grown GaN, a broad band with a maximum at 3.0 eV is identified as the BL2 band [24,25]. The BL2 band is quenched above 75 K with an activation energy of about 150 meV [25], which is very similar to the results reported in Ref. [11]. It is interesting to note that a very similar quenching of the BL2 band with increasing temperature from 15 to 150 K, and the dominance of the Zn-related BL band at higher temperatures, was observed in Refs. [13,25]. The BL2 band demonstrates a characteristic metastable behavior; namely, it bleaches considerably under continuous above-band-gap illumination, while the YL band simultaneously rises [25]. The bleaching has been attributed to a recombination-assisted dissociation of a defect complex, apparently containing hydrogen as a component [5]. A very similar behavior of the 3.0 eV band in C-doped GaN was reported in Ref. [11]. Note that a strong blue band with a maximum at 3.05 eV was observed in C-doped ([C] = 1.5 × 10 18 cm −3 ), conductive (n = 10 18 cm −3 ) GaN grown by MOCVD [46]. The blue band in Ref. [46] became much stronger than the YL band after treatment in hydrogen plasma at 200 °C for 1 h. This may indicate that the authors of Ref. [46] observed the BL2 band related to a defect complex containing hydrogen. In some GaN samples (undoped, C doped, and Fe doped), a fine structure of the BL2 band is observed, with the ZPL at 3.34 eV [25]. From the position of the ZPL, it was suggested that the transition level responsible for the BL2 band is located 0.15 eV above the valence band [24]. Summarizing the above information, we suggest that the blue bands sometimes observed in C-doped GaN have no relation to the isolated C N defect.
B. Yellow and green luminescence bands in GaN grown by HVPE
The GL band with a maximum at 2.4 eV is observed only in high-quality GaN grown by the HVPE technique. In freestanding GaN (sample B73), the defect responsible for the GL band is the dominant deep-level defect, with a concentration of about 10 15 cm −3 (Ref. [6]). From temperature-dependent Hall effect measurements, the total concentration of acceptors in a similar sample has been estimated to be 2.4 × 10 15 cm −3 (Ref. [48]). From the SIMS analysis of similar freestanding GaN, it was found that the concentrations of oxygen and carbon are on the order of 10 16 cm −3 each, and the O N defect (with a concentration of 7.8 × 10 15 cm −3 from Hall effect measurements) is the main shallow donor in these samples [49,50]. Since the GL band intensity increases as the square of the excitation intensity [6], it is reasonable to attribute this band to transitions of electrons from the conduction band to the 0/+ level of the C N defect. In this case, the YL band can be caused by transitions via the −/0 level of the same defect.
Analysis of the time-resolved PL spectra in GaN samples exhibiting a strong GL band (Fig. 9 for sample RS280 and Fig. 4 in Ref. [51] for freestanding GaN) indicates that the YL band related to the −/0 level of C N has a maximum at 2.1 eV. In these high-purity HVPE samples, the time-integrated intensity of the YL band after pulsed excitation is only slightly lower than that of the GL band (inset in Fig. 13), in agreement with the assumption that the YL band and the GL band are associated with transitions via different charge levels of the same defect.
However, in less pure HVPE samples (such as sample 1007), and apparently in all MOCVD samples, another defect is responsible for the dominant YL band, which has a maximum at 2.2 eV. Previously, we attributed this YL band to the C N O N complex [7]. The saturation of this YL band in MOCVD samples is not followed by the emergence of any other PL band at higher photon energies (Fig. 2). This suggests that in MOCVD samples the YL band is produced by the C N O N complex, and the isolated carbon defect C N is not found.
On the other hand, in HVPE-grown GaN layers on sapphire (more than 20 samples studied in this paper), the intensities of the YL bands originating from the C N and C N O N defects may be comparable. For the samples of highest purity, such as sample RS280, the C N -related band is the dominant YL band. In these samples, the time-integrated intensities of the YL and GL bands after a laser pulse are nearly equal to each other, which is consistent with the assignment of these bands to two different charge states of the same defect. In less pure samples, such as sample 1007, the C N O N -related YL band is dominant. In this case, the time-integrated intensity of the YL band is about 10 times higher than that of the GL band. Thus, we expect that in this sample the concentration of the C N O N defects is higher than that of the C N defects by roughly an order of magnitude. It appears that only in GaN samples with very low concentrations of carbon and oxygen impurities can the isolated C N defects be the dominant defects causing the YL and GL bands, whereas in samples with a relatively high concentration of C, O, or both, the C N O N complexes are likely to form and will cause the YL band but not the GL band.
C. Comparison of theory and experiment
The experimental and theoretical findings are summarized in Table III. The maximum of the characteristic excitation band, ω max exc , for the YL band (assumed to be related to the C N O N complex) in GaN was estimated to be 3.19 eV (Ref. [52]) and 3.32 eV (Ref. [53]) from the analysis of the shape of the PL excitation (PLE) spectrum. Note that a significant part of the characteristic excitation band was obscured in these experiments by the contribution of band-to-band excitation in the PLE spectrum. We expect that this may result in an uncertainty of about ±0.1 eV in the determination of ω max exc . The experimental value of the ZPL for the YL band (2.64-2.70 eV) was found as the middle point between the threshold of the PL band and that of the PLE band [5,52]. The experimental values of the ZPL for the C N -related GL and YL bands in the studied HVPE-grown GaN samples were estimated from the best fits of the PL band shape with Eq. (1). To account for possible errors in these estimates, we rounded the values and added tilde marks in Table III. The distance of the defect level from the VBM, E A , was calculated as the difference between the band-gap energy and the ZPL.
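As a worked example of this last step (our arithmetic, assuming a low-temperature band gap of GaN of E g ≈ 3.50 eV, a value not quoted in this text): for the GL band with ZPL at E 0 ≈ 2.9 eV, E A = E g − E 0 ≈ 3.50 − 2.9 ≈ 0.6 eV, consistent with the activation energy of 0.54 eV obtained independently from the thermal quenching of the GL band (Fig. 6).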
Overall, the experimental values agree very well with the calculated ones. In most cases, the disagreement between calculations and experiment does not exceed 0.1 eV. This close agreement allows us to conclude that there are two separate sources of the YL band in different samples, namely, the isolated C N defect and the C N O N complex. These two cases can be distinguished by the presence of the GL band, seen only in some high-quality samples containing mostly isolated C N acceptors.
D. V Ga -related defects as a source of the YL band
According to our hybrid functional calculations, neither the V Ga nor its complexes with O N , Si Ga , or C Ga can be responsible for the YL band, since optical transitions for these defects are expected in the infrared part of the spectrum. The correlations reported in the literature between the YL band intensity and the concentration of V Ga -containing defects could be accidental, resulting from changes in the concentrations of other defects, such as carbon. For example, in Ref. [54], the YL intensity increased almost linearly with the concentration of Ga vacancies for a few data points. However, the YL band in that paper was measured at the GaN/sapphire interface, where the concentration of various defects (other than V Ga ) is very high and may not be the same across the different samples. In another paper [17], an anticorrelation between the YL intensity and the concentration of V Ga -containing defects was observed for three high-resistivity GaN samples. Xu et al. [55] noticed that in GaN samples with undetectable amounts of Ga vacancies, the YL intensity was significantly higher than in GaN samples containing V Ga at a concentration of ∼10 17 cm −3 . Suihkonen et al. [56] also noted that the YL band in GaN is not related to Ga vacancies, because no increase of the YL intensity with increasing concentration of V Ga was observed. In spite of the contradictory data regarding the correlation between the YL intensity and the concentration of V Ga -containing defects, the idea that the YL band is caused by the V Ga O N complex, at least in some GaN samples, is still widespread [57,58]. This attribution is based on the results of early DFT calculations [38,39]. However, our hybrid functional calculations show that the V Ga O N complex (as well as the isolated V Ga and its complexes with Si Ga or C Ga ) has very deep transition levels in n-type GaN. The PL bands expected from the V Ga -containing defects would be observed in the infrared part of the PL spectrum, if these defects are radiative. Therefore, it is unlikely that V Ga -related defects are responsible for the observed YL and GL bands.
V. CONCLUSIONS
The GL band with a maximum at 2.4 eV is observed only in high-purity GaN samples grown by the HVPE technique. This PL band is attributed to transitions of electrons from the conduction band to the 0/+ level of the C N defect. According to first-principles calculations, the C N defect has two transition levels: the −/0 level at 1.04 eV and the 0/+ level at 0.48 eV above the VBM. In n-type GaN grown by HVPE, the YL band with a maximum at 2.1 eV is caused by the recombination of free electrons with holes at the −/0 level of C N at low excitation intensities, and it is replaced with the GL band at high excitation intensity. The intensity of the GL band increases as the square of the excitation intensity, in agreement with the assumption that two holes must be captured by the C N defect before the GL band can be observed. The ZPL of the GL band is determined to be at 2.9 eV by simulating the shape of the GL band using a one-dimensional configuration coordinate model. This value agrees with the activation energy of 0.54 eV for the thermal emission of holes from the 0/+ level of C N to the valence band, which explains the thermal quenching of the GL band in the temperature range 300-400 K. The YL band with a maximum at 2.1 eV is caused by transitions via the −/0 level of C N . It can be observed in time-resolved PL spectra from some HVPE-grown samples, at time delays after a laser pulse when the much faster GL band vanishes. In other samples, it is buried under a stronger YL band of different origin, which has a maximum at 2.2 eV. The 2.2-eV band is the dominant PL band in undoped GaN grown by MOCVD. In our opinion, this band is caused by transitions of electrons from the conduction band (or from shallow donors at low temperatures) to the C N O N complex. It appears that in the majority of GaN samples, carbon impurities form complexes with oxygen and produce the YL band at 2.2 eV. Only in HVPE-grown GaN with low concentrations of carbon and oxygen defects can luminescence from isolated C N defects be observed, as the YL band at 2.1 eV, which at high excitation intensities transforms into the GL band with a maximum at 2.4 eV.
FIG. 1. (Color online) Schematic band diagram and predicted transition levels for the C N O N complex and the isolated C N defect. Calculations predict that the C N O N complex forms the 0/+ transition level in the band gap, while C N forms two transition levels: −/0 and 0/+. The C N O N complex is expected to generate only the YL band. The C N defect can generate the YL band and an additional, higher-energy band after the YL is saturated.
FIG. 5. (Color online) PL spectra at selected temperatures for sample B73. The lines show steady-state PL spectra at P exc = 0.3 W/cm 2 . Triangles show the time-resolved PL spectrum measured at 300 K.
FIG. 11. (Color online) The steady-state PL spectra at T = 18 K and different excitation intensities for sample 1007. In the defect-related region of the spectra, the contributions from four PL bands can be found: UVL, GL, YL, and RL. The dashed and dotted curves show an example of deconvolution. The crosses show the sum of the GL, YL, and RL bands used for the deconvolution of the PL spectrum at P exc = 0.19 W/cm 2 .
FIG. 13. (Color online) Decay of PL after a laser pulse for three HVPE GaN samples at 100 K. The emission photon energy is 2.3 eV, at which the intensities of the GL and YL bands are close to the intensities at their maxima. The intensity for sample 1007 is multiplied by 0.1 for clarity. The dashed and dash-dotted curves are calculated using Eq. (5), with the following parameters: I PL (0) = 1.6 and τ = 3.6 μs (GL band) and I PL (0) = 1.95 × 10 −3 and τ = 1.3 ms (YL band) for sample B73; I PL (0) = 0.23 and τ = 4.5 μs (GL band) and I PL (0) = 2.5 × 10 −4 and τ = 1.85 ms (YL band) for sample RS280; and I PL (0) = 0.018 and τ = 1.8 μs (GL band) and I PL (0) = 6 × 10 −4 and τ = 0.3 ms (YL band) for sample 1007. The inset shows the time-integrated intensities of the GL and YL band components in relative units.
FIG. 14. (Color online) Formation energies of the V Ga and its complexes as a function of the Fermi energy for (a) Ga-rich and (b) Ga-poor growth conditions. For V Ga O N , the 2−/0 charge state line intersection is at ∼1.8 eV above the VBM, with the −/0 and 2−/− transition levels being very close to each other: 1.85 and 1.76 eV, respectively.
FIG. 15. (Color online) (a) Formation energies of the carbon acceptor C N , the oxygen donor O N , and the donor-acceptor complex C N O N in GaN as a function of the Fermi energy. Solid and dashed lines for O N and C N O N correspond to two different oxygen chemical potentials (see text). Dotted lines indicate the transition levels responsible for the observed PL bands. (b) Binding energy of the C N O N complex.
FIG. 16. (Color online) Schematic configuration coordinate diagrams for the C N acceptor [(a) and (b)] and for the C N O N donor (c). The optical transitions are shown with arrows: downward for PL (emission) and upward for the resonant excitation of a defect (absorption). (a) Absorption and emission via the −/0 transition level of C N ; (b) transitions via the 0/+ level of C N ; (c) transitions via the 0/+ level of the C N O N complex; and (d) band diagram with transition levels.
TABLE I. Parameters of the GaN samples analyzed in this paper.
TABLE II. Concentrations of impurities (per cubic centimeter) from SIMS measurements.
TABLE III. Comparison of theory and experiment for observed PL bands and suggested defect configurations. Experimental values are taken from this work and from Refs. [5,52].
"Materials Science",
"Physics"
] |
Competitive endogenous network of circRNA, lncRNA, and miRNA in osteosarcoma chemoresistance
Osteosarcoma is the most prevalent and fatal type of bone tumor. Despite advancements in the treatment of other cancers, overall survival rates for patients with osteosarcoma have stagnated over the past four decades. Multiple-drug resistance, the capacity of cancer cells to become simultaneously resistant to multiple drugs, remains a significant obstacle to effective chemotherapy. Recent studies have shown that noncoding RNAs can regulate the expression of target genes. It has been proposed that "competing endogenous RNA" activity forms a large-scale regulatory network across the transcriptome, playing important roles in pathological conditions such as cancer. Numerous studies have highlighted that circular RNAs (circRNAs) and long noncoding RNAs (lncRNAs) can bind to microRNA (miRNA) sites as competitive endogenous RNAs, thereby affecting and regulating the expression of mRNAs and target genes. These circRNA/lncRNA-associated competitive endogenous RNAs are hypothesized to play significant roles in cancer initiation and progression. Noncoding RNAs (ncRNAs) play an important role in tumor resistance to chemotherapy. However, the molecular mechanisms of the lncRNA/circRNA-miRNA-mRNA competitive endogenous RNA network in drug resistance of osteosarcoma remain unclear. An in-depth study of the molecular mechanisms of drug resistance in osteosarcoma and the elucidation of effective intervention targets are of great significance for improving the overall recovery of patients with osteosarcoma. This review focuses on the molecular mechanisms underlying chemotherapy resistance in osteosarcoma within circRNA-, lncRNA-, and miRNA-mediated competitive endogenous networks.
However, long-term chemotherapy poses the risk that the patient's cancer cells will develop resistance to the chemotherapeutic drugs, eventually culminating in OS recurrence, distant metastasis, and treatment failure.
The resistance of OS to therapy is intimately linked with multidrug resistance, which emanates from prolonged exposure of cancerous cells to a particular chemotherapeutic agent. This exposure can lead to cross-resistance against diverse chemotherapeutic agents with varying structures and functions. The effectiveness of chemotherapy in OS is markedly impacted by multidrug resistance. Presently, there are no conventional methods to surmount chemotherapy resistance in malignancies without inducing adverse side effects. The exploration of new generations of antitumor medications to combat tumor resistance has become a pivotal concept in the realm of cancer therapy.
For the purpose of curtailing OS recurrence and metastasis rates, it is of paramount significance to elucidate the manifold resistance mechanisms of OS against chemotherapeutic drugs and to investigate potential strategies for reversing this process. This review delves into the intricate mechanisms of drug resistance in OS, with particular emphasis on circRNA-, lncRNA-, and miRNA-mediated competitive endogenous networks.
Noncoding RNAs
It has been demonstrated that noncoding RNAs (ncRNAs), such as circular RNAs (circRNAs), long noncoding RNAs (lncRNAs), and microRNAs (miRNAs), play significant roles in the regulation of cancer biology. The primary function of ncRNAs, which are not translated into proteins, is to regulate gene expression. Numerous biological properties of ncRNAs have been identified over the past few years. In addition, a growing number of ncRNAs are thought to play roles in OS tumorigenesis, invasion, metastatic progression, apoptosis, and drug resistance [9]. Salmena first proposed the competitive endogenous RNA hypothesis in 2011 [10], suggesting that lncRNAs might regulate the expression of downstream genes by competitively binding to miRNAs through microRNA response elements. Competitive endogenous RNA is not a novel ribonucleic acid molecule; rather, it is a novel mechanism of gene regulation. There is mounting evidence that noncoding RNAs, particularly circRNAs, lncRNAs, and miRNAs, form a competitive endogenous RNA regulatory network with mRNAs, and this network influences drug resistance [11-14]. Importantly, noncoding RNAs play a role in OS drug resistance through their competitive endogenous RNA mechanisms.
circRNA mediated competitive endogenous RNA in OS chemoresistance
When first discovered in 1976, circRNAs were assumed to be products of aberrant splicing with low expression levels [15,16]. Numerous studies have since demonstrated that circRNAs are involved in various pathophysiological processes in the body and are abnormally expressed in a wide range of malignant tumors, including OS [22], gastric cancer [17], bladder cancer [18], liver cancer [19], colorectal cancer [20], and breast cancer [21]. Abnormal circRNA expression drives tumorigenesis and modifies the biological functions of cells [23,24]. In addition, circRNAs have been shown to be associated with tumor therapy resistance; for example, circ_0026359 promotes cisplatin resistance in gastric cancer [25]. The results of several studies have shown that circRNAs have a significant impact on chemotherapy resistance in OS (Table 1).
CircRNAs and cisplatin resistance of OS
The competitive endogenous RNA mechanism plays a major role in the biological functions of circRNAs in cells because circRNAs contain miRNA-binding sites [12,26]. circ-CHI3L1.2 was found to be elevated in cisplatin-resistant OS cells [27], and knockdown of circ-CHI3L1.2 sensitized these cells to cisplatin through the miR-340-5p/LPAATβ axis. The authors found that miR-340-5p could bind to circ-CHI3L1.2; moreover, the protein expression of LPAATβ, the target of miR-340-5p, decreased when circ-CHI3L1.2 was knocked down. Notably, the effect of circ-CHI3L1.2 knockdown was mitigated by a miR-340-5p inhibitor. According to these findings, the miR-340-5p/LPAATβ axis was involved in the contribution of circ-CHI3L1.2 to cisplatin resistance. The effects of circ-CHI3L1.2 knockdown were only partially reversed by miR-340-5p suppression, which suggests that there are additional downstream pathways besides the miR-340-5p/LPAATβ axis; other potential mechanisms should be explored in future studies. The oncogene circPVT1 has been linked to numerous cancers. CircPVT1 was found to be upregulated in OS tissues resistant to cisplatin, doxorubicin, or methotrexate [28]. The targeting relationships of circPVT1/miR-24-3p and miR-24-3p/KLF8 were verified: circPVT1 can act as a sponge of miR-24-3p, and the negative regulation between circPVT1 and miR-24-3p was further confirmed in OS cells. This research found that the overexpression of miR-24-3p inhibited the proliferation of OS cells and increased the sensitivity of chemoresistant U2OS and MG63 cells to chemotherapy. Bioinformatics analysis suggested that KLF8 might be a downstream target of miR-24-3p.
The binding association between miR-24-3p and KLF8 was confirmed by dual-luciferase reporter and RNA pulldown assays. The KLF8 transcription factor plays a vital role in oncogenic transformation; nevertheless, the existing literature lacks a thorough discussion of the oncogenic role of KLF8 and its underlying mechanism. KLF8 was highly expressed in OS cell lines and was even further upregulated in chemoresistant OS cells, as confirmed by qRT-PCR and Western blotting. The expression of KLF8 exhibited a positive correlation with that of circPVT1 and a negative association with that of miR-24-3p. Collectively, through the miR-24-3p/KLF8 axis, circPVT1 promotes OS cell proliferation and drug resistance. CircPVT1 is involved in drug resistance in OS tumor cells through multiple pathways: its overexpression is responsible for OS cell resistance to doxorubicin and cisplatin by controlling the ATP-binding cassette (ABC) transporter ABCB1 [29]. OS cells resistant to cisplatin show elevated expression levels of circUBAP2 and SEMA6D; by activating the Wnt/β-catenin signaling pathway through the miR-506-3p/SEMA6D axis, circUBAP2 increases OS resistance to cisplatin [30]. CircDOCK1 [31] encourages OS cells to become cisplatin-resistant via the miR-339-3p/IGF1R axis. The circRNA LARP4 increases OS chemotherapy sensitivity to cisplatin and doxorubicin by sponging miR-424 [32]. The competitive endogenous RNA mechanism of circRNAs thus contributes to the resistance of OS to chemotherapy as well as to several other biological functions.
CircRNAs and doxorubicin resistance of OS
By controlling miR-342-3p and FBN1, hsa_circ_0004674 [33] promotes resistance to doxorubicin through the Wnt/β-catenin pathway, suggesting that hsa_circ_0004674 could be a promising target against OS drug resistance. The researchers found high levels of hsa_circ_0004674 expression in doxorubicin-resistant osteosarcoma cells and tissues, and OS tumors became more sensitive to doxorubicin when hsa_circ_0004674 was knocked down. Moreover, their study discovered that miR-342-3p was underexpressed in doxorubicin-resistant OS tissues and cells and that it inhibits OS doxorubicin resistance. The reversal effect of anti-miR-342-3p on si-hsa_circ_0004674 suggested that hsa_circ_0004674 modulates the doxorubicin resistance of OS by targeting miR-342-3p. Studies unveiled that miR-342-3p targets FBN1, whose aberrant expression is linked to the malignant phenotype of several tumors, such as ovarian cancer [34] and papillary thyroid carcinoma [35]; the results showed that the interaction between miR-342-3p and FBN1 modulates resistance to doxorubicin. Research suggests that during the malignant progression of many cancers, the activity of the Wnt/β-catenin signaling pathway is significantly upregulated [36]. Related studies of osteosarcoma have shown that activation of the Wnt/β-catenin pathway is associated with chemoresistance, whereas inhibiting this pathway enhances chemotherapy sensitivity. Silencing hsa_circ_0004674 was found to inhibit the activity of the Wnt/β-catenin pathway, and further analysis revealed that hsa_circ_0004674 positively regulates the activity of the Wnt/β-catenin pathway through the miR-342-3p/FBN1 axis. It has been reported that circPVT1 [37] is involved in OS cell resistance to doxorubicin by controlling TRIAP1 through miR-137. circPVT1 knockdown could boost doxorubicin sensitivity by inhibiting proliferation and enhancing doxorubicin-induced apoptosis in doxorubicin-resistant osteosarcoma cells in vitro. Mechanistic analysis revealed that circPVT1 functions as a miR-137 sponge to regulate TRIAP1 expression; furthermore, a miR-137 inhibitor was able to partially reverse the inhibitory effect of silencing circPVT1 on the TRIAP1 level in doxorubicin-resistant osteosarcoma cells, validating the role of circPVT1 as a miR-137 sponge that upregulates TRIAP1 expression. Through the miR-524/RASSF6 axis, circITCH downregulation promotes OS development and doxorubicin resistance [38]; the interaction between circITCH, miR-524, and RASSF6 was confirmed through dual-luciferase reporter and RNA immunoprecipitation assays. By binding to miR-26b-5p and modulating EZH2, circular RNA ANKIB1 promotes chemotherapy resistance in OS [39]. The expression of miR-26b-5p was suppressed in OS tissues and cells as well as in doxorubicin-resistant OS tissues and cells, while the levels of circ_ANKIB1 and EZH2 were increased. Circ_ANKIB1 binds to miR-26b-5p, miR-26b-5p directly targets EZH2, and increasing the level of EZH2 reversed the effect of elevated miR-26b-5p on doxorubicin-resistant cells. In vivo, silencing of circ_ANKIB1 suppressed the growth of doxorubicin-resistant OS cells.
CircRNAs and methotrexate resistance of OS
It was discovered for the first time that hsa_circ_0000073 may enhance the proliferation, migration, invasion, and methotrexate resistance of OS cells [40]. The expression of hsa_circ_0000073 was found to be highly upregulated in both OS cells and tissues, which in turn was associated with poor survival. To determine whether hsa_circ_0000073 participates in the competitive endogenous RNA model, predictions were made, and it was observed that miR-145-5p and miR-151-3p directly bind to hsa_circ_0000073. At the same time, miR-145-5p and miR-151-3p exhibited a negative correlation with hsa_circ_0000073. miR-145-5p and miR-151-3p directly regulate NRAS; in OS cells, hsa_circ_0000073 upregulates NRAS by inhibiting miR-145-5p and miR-151-3p.
According to their study, hsa_circ_0000073 may enhance the proliferation, migration, and invasion of OS cells by directing the regulation of NRAS through miR-145-5p or miR-151-3p. The authors also hypothesized that methotrexate resistance in OS could be closely associated with the hsa_circ_0000073/miR-145-5p and miR-151-3p/NRAS axes. circ_0081001 [41] has been implicated in regulating the sensitivity of OS cells to methotrexate by controlling the miR-494-3p/TGM2 axis. In methotrexate-resistant OS tissues and cells, the expression levels of circ_0081001 and TGM2 were upregulated, while miR-494-3p was downregulated. Interference with circ_0081001 increased cell sensitivity to methotrexate by promoting apoptosis and inhibiting cell viability and metastasis in vitro. Furthermore, a molecular sponge effect of circ_0081001 on miR-494-3p led to the upregulation of the TGM2 level. Knockdown of circ_0081001 inhibited methotrexate resistance by upregulating miR-494-3p and downregulating TGM2, and the downregulation of circ_0081001 improved the methotrexate sensitivity of OS in vivo.
LncRNA mediated competitive endogenous RNA in chemoresistance of OS
LncRNAs are a class of RNA molecules with lengths ranging from 200 to 100,000 nucleotides. They regulate gene expression at various levels but do not encode proteins [42,43]. Several lncRNAs have unusually high expression levels in cancer cells and can function as oncogenes or tumor suppressors, participating in the formation and spread of tumor cells [44]. They also contribute biologically to resistance to chemotherapy. Numerous studies have demonstrated a connection between chemotherapy resistance and changes in the expression levels of certain lncRNAs in OS tumor cells [9,45] (Table 2).
LncRNAs play key roles in drug resistance; generally, lncRNAs whose levels are elevated in OS contribute to drug resistance through a competitive endogenous RNA mechanism [46][47][48].
Researchers have confirmed that the lncRNA TTN-AS1 controls OS cell growth, apoptosis, and cisplatin resistance and promotes MBTD1 expression by targeting miR-134-5p [49]; patients with OS showed high levels of TTN-AS1 expression, and drug resistance could be reduced by downregulating TTN-AS1. Zhang et al. [50] examined the expression of the lncRNA KCNQ1OT1 in the tumors and adjacent tissues of 30 patients with OS: KCNQ1OT1 inhibited miR-129-5p expression, which in turn promoted cell proliferation, invasion, drug resistance, and LARP1 expression. DNMT1-mediated Kcnq1 expression increases upon knockout of KCNQ1OT1, making OS cells more sensitive to cisplatin [51]. By targeting the miR-130a-3p/SP1 axis, MIR17HG helps OS cells develop cisplatin resistance [46]. Doxorubicin-resistant OS cells and tissues had lower levels of the lncRNA FENDRR, which was linked to worse prognoses in patients with OS [52]. The lncRNA HOTAIR is upregulated in cisplatin-resistant OS tumor cells [47]; by directly binding to and controlling miR-106a-5p, which is reduced in OS tissues and cisplatin-resistant cells, HOTAIR overexpression upregulates STAT3 expression. Cisplatin resistance and the expression of drug resistance-related genes in Saos2/cisplatin, MG-63/cisplatin, and U2-OS/cisplatin cells were diminished when HOTAIR was knocked down. OS tissues had considerably increased SNHG16 and ATG4B expression.
A higher level of SNHG16 expression is linked to a worse prognosis in patients with OS [48].
In OS-resistant HOS/cisplatin cells, one study found that the expression of the lncRNA NORAD and of MRP1 mRNA and protein was significantly elevated, while the expression of miR-410-3p was considerably reduced [53]. When the lncRNA ANRIL was knocked down in U2-OS and Saos-2 OS cells, the ANRIL-silenced cells became more susceptible to cisplatin [54]. In ANRIL-silenced cells, the level of miR-125a-5p, which binds to ANRIL, increased; furthermore, there was a decrease in the expression of STAT3, a target of miR-125a-5p. The researchers demonstrated that suppressing lncRNA ANRIL expression enhanced the sensitivity of OS cells to cisplatin by selectively regulating miR-125a-5p. Wen et al. discovered that OS cells became cisplatin-sensitive when the lncRNA SARCC was overexpressed [55]. Using microarray analysis, the authors found that SARCC increased miR-143 expression in OS; conversely, SARCC and miR-143 expression were downregulated in cisplatin-resistant OS cells. In OS, miR-143 directly targets hexokinase 2 (HK2), the key enzyme in glycolysis, so the SARCC/miR-143/HK2 RNA network may regulate OS chemosensitivity. By sponging miR-140-5p, the lncRNA MSC-AS1 triggers osteogenic differentiation [56]; in addition, when MSC-AS1 was silenced, cisplatin became more toxic to OS cells, and overexpression of MSC-AS1 in OS patients led to a worse prognosis. In tumor cells with silenced MSC-AS1, increased miR-142 decreased CDK6 and deactivated the PI3K/AKT axis, inhibiting OS cell processes. Previous research indicated significantly increased expression of the lncRNA OIP5-AS1 in cisplatin-resistant OS cells, leading to resistance through the LPAAT, PI3K, AKT, and mTOR pathways [57]; knockdown of OIP5-AS1 effectively reduced cisplatin resistance. Knockdown of OIP5-AS1 also enhanced cisplatin sensitivity in OS via the miR-377-3p/FOSL2 axis [58], while the lncRNA ROR [59] mediated cisplatin resistance in OS by controlling ABCB1 through miR-153-3p. NCK1-AS1 silencing restrained OS cell proliferation, migration, and invasion, and heightened cisplatin sensitivity [60]; cisplatin-resistant OS cells exhibited notable upregulation of the lncRNA NCK1-AS1, and although overexpressing miR-137 increased the sensitivity of OS cells to cisplatin, the effect was counteracted by high levels of NCK1-AS1 in cisplatin-resistant cells. Another study discovered elevated expression of the lncRNA DNAJC3-AS1 [61] in OS, which decreased OS sensitivity to cisplatin; this effect was reversed by downregulating the sense-cognate gene DNAJC3. Elevated lncRNA HOTTIP [62] promoted chemoresistance in OS by activating the Wnt/β-catenin pathway.
In summary, understanding how changes in the expression levels of lncRNAs in OS tumor cells contribute to chemotherapy resistance may reveal new intervention targets for overcoming drug resistance.
Prospects
Combination of at least two antitumor medications: increased lethality against tumor cells
Chemotherapy of osteosarcoma has evolved from the initial single-drug application to the current multidrug combinations, and combining several chemotherapy drugs has achieved good therapeutic effects. At the same time, the efficacy and adverse reactions of each drug should be comprehensively considered when chemotherapy drugs are combined: adverse reactions should be minimized, and personalized treatment plans should be provided for patients. How to combine drugs rationally has become a future research direction for molecular targeted drugs.
Prolonging the exposure to chemotherapeutic drugs
The toxicity of a drug to normal tissue limits the usable dose, and the kinetics of the drug (including absorption, in vivo distribution, metabolism, and elimination) limit the concentration the drug can reach in the tumor tissue. Recent advances in cancer nanotechnology can offer chemotherapeutic drugs longer exposure times and extended circulating half-lives. Nevertheless, the most fundamental solution is to prevent and overcome multiple-drug resistance to chemotherapy drugs.
Build drug-resistant tumor animal models to test clinical relevance
Before the stage of clinical and translational medicine research, it is necessary to combine in vivo pharmacology methods and genomics analysis platforms.Through the creation of animal models with drug-resistant tumors, the mechanism behind OS resistance can be better understood, and an optimal dosage regimen can be established.This approach can lead to the design of more effective OS drugs that demonstrate clinical benefits, ultimately fostering a new generation of anticancer medications.The establishment of dependable preclinical tumor drug-resistant cell models holds significant importance for delving further into the attributes of drug-resistant cells and for identifying novel clinical therapeutic strategies.
Establish highly selected and annotated databases of germline and somatic mutant genes
Through genetic testing, the type of genetic variation and the levels of biomarkers in a patient's body can be determined, and clinicians can make decisions based on a comprehensive analysis of these indicators. The established databases include mutation points associated with drug responses, describing the possible role of gene mutation sites in drug resistance together with the level of evidence, in order to effectively address possible drug intolerance or toxic side effects in patients with cancer. With the discovery, prediction, and clinical application of molecular diagnostic markers, precise treatments for different patients are also being promoted. The drug resistance of tumor cells can be viewed from an evolutionary perspective, and a variety of therapeutic methods can be used to combat tumor cells. The most important challenge in tumor resistance is to quickly identify biological indicators of multiple-drug resistance before resistance emerges. Through high-throughput screening techniques and systems biology approaches, researchers can partially detect or predict the response of tumor cells to particular chemotherapeutic drugs, thus avoiding complications before administering chemotherapy. In the future, research on tumor resistance may aim to identify molecular markers of resistance, predict and monitor chemotherapy efficacy, perform early detection and prognosis assessment combined with laboratory and imaging examinations, and develop chemotherapy drugs for effective targets.
Conclusion
In this review, we discussed the molecular mechanisms underlying chemotherapy resistance in OS, especially in terms of circRNA-, lncRNA-, and miRNA-mediated competitive endogenous networks. CircRNAs bind to miRNAs and act as miRNA sponges, thus regulating the target genes of the miRNAs [61]; this is known as the competitive endogenous RNA mechanism. Through their interactions with miRNAs, circRNAs play an important regulatory role in tumorigenesis and tumor progression. Researchers have found abnormal expression of circRNAs/lncRNAs in drug-resistant OS cells, suggesting that circRNAs/lncRNAs play a role in chemotherapy resistance in OS, and the roles of specific circRNAs/lncRNAs have been explored. These findings provide a foundation for elucidating the mechanism of cisplatin resistance in OS and, eventually, new intervention targets for ncRNA-based therapeutics in OS, with the goal of preventing chemoresistance. The implications are significant, both for advancing oncology research and for actual patient outcomes. However, research on the molecular mechanisms underlying chemotherapeutic drug resistance in OS is still at an early stage, and further studies are required to elucidate the involvement of ncRNAs in the drug resistance of OS.
Table 1. circRNAs and osteosarcoma chemotherapy resistance

| circRNA | Reported mechanism | Ref. |
|---|---|---|
| circ-CHI3L1.2 | Increases resistance to cisplatin through the miR-340-5p/LPAATβ axis | [27] |
| hsa_circ_0004674 | Regulates the miR-342-3p/FBN1 axis to promote doxorubicin resistance | [33] |
| circPVT1 | Encourages OS cell proliferation and chemoresistance via miR-24-3p/KLF8 | [28] |
| circPVT1 | Regulates TRIAP1 through miR-137, contributing to doxorubicin resistance | [37] |
| circPVT1 | Regulates ABCB1, conferring resistance to doxorubicin and cisplatin | [29] |
| circUBAP2 | Boosts SEMA6D expression; activates Wnt/β-catenin signaling to increase cisplatin resistance | [30] |
| hsa_circ_0000073 | Promotes methotrexate resistance by sponging miR-145-5p and miR-151-3p and upregulating NRAS | [40] |
| circ_0081001 | Knockdown increases methotrexate sensitivity by controlling the miR-494-3p/TGM2 axis | [41] |
| circDOCK1 | Aids tumorigenesis and cisplatin resistance via the miR-339-3p/IGF1R axis | [31] |
| circITCH | Downregulation promotes OS development and doxorubicin resistance via the miR-524/RASSF6 axis | [38] |
| circLARP4 | Increases chemosensitivity to cisplatin and doxorubicin by sponging miR-424 | [32] |
| circ_ANKIB1 | Accelerates chemoresistance via miR-26b-5p | [39] |
Neutrino oscillations in discrete-time quantum walk framework
Here we present neutrino oscillation in the framework of quantum walks. Starting from a one-dimensional discrete-time quantum walk, we present a scheme of evolutions that simulates neutrino oscillation. The set of quantum walk parameters required to reproduce the oscillation probability profiles obtained in both long-range and short-range neutrino experiments is explicitly presented. Our scheme to simulate three-generation neutrino oscillation from quantum walk evolution operators can be physically realized in any low-energy experimental set-up with access to a controllable single six-level system, a multiparticle three-qubit system, or a qubit-qutrit system. We also present the entanglement between the spin and position space during neutrino propagation, which quantifies the wave-function delocalization around the instantaneous average position of the neutrino. This work contributes towards understanding neutrino oscillation from the quantum information perspective.
I. INTRODUCTION
Neutrino oscillation is a well-established phenomenon explained by quantum field theory (QFT). Pauli first proposed the neutrino to explain the continuous energy spectrum of the electron in beta decay [1]. In the standard model (SM) description, neutrinos are massless and very weakly interacting particles. However, in order to give a correct interpretation of the experimental results it was established that neutrinos are massive and that leptons mix [2][3][4]. Neutrino oscillation, which implies that a neutrino can change from one flavor to another, is a consequence of the neutrino masses and lepton mixing [5]. Hence, neutrino oscillations indicate an incompleteness of the SM and open a window to physics beyond the SM.
The last few years have seen an increasing interest by the physics community in reconstructing and understanding physics from a quantum information perspective, that is to say, "It from qubit" [6][7][8]. This approach gives us access to understanding various natural physical processes in the form of quantum information processing. It also allows us to simulate inaccessible and experimentally demanding high-energy phenomena in low-energy quantum bit (qubit) systems. Parameter tuneability in protocols that simulate real effects gives access to different physical regimes which are not accessible in real particle-physics experiments. Thus, we can anticipate a significant contribution to our understanding of physics beyond the known standard theories via quantum simulations.
Any standard quantum information processing protocol on the basic unit of quantum information, the qubit, can be described using three steps: (a) initial state preparation, (b) evolution operations, and (c) measurements. In this work, starting from the initial state preparation of qubits, we present a scheme of evolution that can simulate three-flavor neutrino oscillation. To simulate neutrino oscillations, where the dynamics of each flavor is defined by the Dirac equation, we use discrete-time quantum walk (DTQW) evolution operators. The DTQW, defined as the evolution of a wave packet in a superposition of position space, can also be viewed as the physical dynamics of information flow [9,10], which can be engineered to simulate various quantum phenomena, for example Anderson localization [11], the Dirac equation [12][13][14][15], and topologically bound states [16]. Recent results have shown that one-dimensional DTQWs produce the free Dirac equation in the small mass and momentum limit [13]. The fact that the neutrino mass eigenstates are solutions of the free Dirac Hamiltonian was the main motivation for us to connect the simulation of the free Dirac equation and neutrino oscillation with the DTQW. This also allows us to understand neutrino oscillation in the framework of the DTQW, whose dynamics is discrete in both space and time. The description of the dynamics in the form of a unitary operation for each discrete time step helps us to address the quantum correlations between the position space and the spin degree of freedom as a function of time. This discrete approach can also lead towards simulating various high-energy phenomena and addressing the dynamics of quantum correlations between the different possible combinations of the Hilbert spaces involved.
To simulate oscillations between three neutrino flavors we present a DTQW on a system with six internal degrees of freedom, which physically can be realized using a single six-dimensional system, a three-qubit system, or a qubit-qutrit system. With the DTQW having been experimentally implemented in various physical systems [17][18][19][20], simulation of long- and short-range neutrino oscillations on different low-energy physical systems will also be easily realizable as a function of the number of walk steps. By preparing different initial states, different types of neutrino oscillations can be simulated in a simple table-top experimental set-up, which is not straightforward in real-world neutrino oscillation experiments.
For the DTQW parameters which simulate neutrino oscillation, we calculate the entropy of the density matrix as a function of the number of DTQW steps. This entropy, which captures the information content of the evolution of the neutrino density matrix, can be used to quantify the wave-function delocalization in position space. We also calculate the correlation entropy between the position space and a particular neutrino flavor as a measure of the information extractable about the whole state of the neutrino wave function when we detect the spin part of that particular flavor state. This simulation, and the information obtained from the entropy, will contribute towards understanding the role of quantum correlations in neutrino oscillations reported recently [21,22]. Exploring neutrino physics in general from the quantum information perspective thus becomes easily accessible. This could further lead to a way to use quantum simulations and quantum information to study physics beyond the SM and to understand the quantum mechanical origin of some of the interesting phenomena in nature.
The paper is organized as follows. In Sect. II we present a brief introduction to the theory of neutrino oscillation. In Sect. III we introduce the DTQW evolution we use for the simulation of neutrino oscillation. In Sect. IV we present the scheme for simulating three-flavor neutrino oscillation using a one-dimensional DTQW. In Sect. V we numerically simulate these oscillations and present the DTQW evolution parameters that recover the short- and long-range neutrino oscillations. In Sect. VI we present the entanglement between position space and the internal degrees of freedom of the neutrino during propagation. Finally, we end with concluding remarks in Sect. VII.
II. PHYSICS OF THE NEUTRINO OSCILLATION
Here, we give a brief discussion of the theory of neutrino oscillations [23,24]. So far, three flavors of neutrinos, $\nu_e$, $\nu_\mu$, and $\nu_\tau$, have been detected experimentally. A neutrino of a given flavor is defined by the leptonic $W$-boson decay: the $W$-boson decays to a charged lepton ($e$, $\mu$, or $\tau$) and a neutrino, and we call the neutrino $\nu_e$, $\nu_\mu$, or $\nu_\tau$ when the corresponding charged lepton is $e$, $\mu$, or $\tau$. Studies have reported that neutrinos have masses and that leptons mix, which means that there is some spectrum of neutrino mass eigenstates $|\nu_i\rangle$. Using this, we can write the neutrinos of definite flavor as a quantum superposition of the mass eigenstates [43]
$$|\nu_\alpha\rangle = \sum_j U^*_{\alpha j}\,|\nu_j\rangle,$$
where $\alpha = e, \mu, \tau$ and $j = 1, 2, 3$. $U^*_{\alpha j}$ is the complex conjugate of the $\alpha j$-th component of the matrix $U$, a $3\times 3$ unitary matrix referred to as the Maki-Nakagawa-Sakata (MNS) matrix, or as the Pontecorvo-Maki-Nakagawa-Sakata (PMNS) matrix [25]. The matrix $U$ and its decomposition can be written as
$$U = \begin{pmatrix} 1 & 0 & 0 \\ 0 & c_{23} & s_{23} \\ 0 & -s_{23} & c_{23} \end{pmatrix}\!\begin{pmatrix} c_{13} & 0 & s_{13}e^{-i\delta} \\ 0 & 1 & 0 \\ -s_{13}e^{i\delta} & 0 & c_{13} \end{pmatrix}\!\begin{pmatrix} c_{12} & s_{12} & 0 \\ -s_{12} & c_{12} & 0 \\ 0 & 0 & 1 \end{pmatrix}\mathrm{diag}\!\left(e^{i\alpha_1/2}, e^{i\alpha_2/2}, 1\right),$$
where $c_{ij} \equiv \cos\theta_{ij}$ and $s_{ij} \equiv \sin\theta_{ij}$, with $\theta_{ij}$ the mixing angles and $\alpha_1$, $\alpha_2$, and $\delta$ the CP-violating phases. The state $|\nu_j\rangle \in \mathcal{H}_{\rm spin} \otimes \mathrm{span}\{|k_{j1}\rangle, |k_{j2}\rangle, |k_{j3}\rangle\}$ is a mass eigenstate of the free Dirac Hamiltonian
$$H_j = c\,\vec{\alpha}\cdot\hat{\vec{p}}_j + \beta\, m_j c^2,$$
where $c$ is the velocity of light in free space, $m_j$ is the mass, and $\hat{\vec{p}}_j$ is the momentum operator corresponding to the $j$-th particle, with positive energy eigenvalue $E_j = \sqrt{|\vec{k}_j|^2 c^2 + m_j^2 c^4}$. Its propagation is described by a plane wave solution of the form
$$|\nu_j(t)\rangle = e^{-i(E_j t - \vec{k}_j\cdot\vec{x})/\hbar}\,|\nu_j\rangle,$$
where $t$ is the time of propagation, $\vec{k}_j$ is the three-momentum, and $\vec{x}$ is the position of the particle relative to the source point. As the neutrino masses are very small, neutrinos are ultra-relativistic particles ($|\vec{k}_j| \gg m_j c$) and we can approximate the energy as
$$E_j \approx E + \frac{m_j^2 c^4}{2E},$$
where $E \approx |\vec{k}_j| c$, taken the same for all $j$ (for simplicity). Now consider a neutrino beam $\nu_\alpha$ created in a charged-current interaction. After time $t$, the evolved state is given by
$$|\nu_\alpha(t)\rangle = \sum_j U^*_{\alpha j}\, e^{-iE_j t/\hbar}\,|\nu_j\rangle. \tag{5}$$
For simplicity we work in one spatial dimension, so our choice is $\vec{k}_j = (k, 0, 0)$, the same for all $j$. As neutrinos are ultra-relativistic, we can also replace $c\,t$ by the travelled distance. Using these assumptions and Eq. (5), the amplitude of finding the flavor state $|\nu_\beta\rangle$ in the original $|\nu_\alpha\rangle$ beam at time $t$ is given by
$$A_{\nu_\alpha \to \nu_\beta}(t) = \sum_j U^*_{\alpha j} U_{\beta j}\, e^{-i m_j^2 c^3 L / 2\hbar E},$$
where $L \approx ct$ is the travelled distance. Squaring it, we find the transition probability $\nu_\alpha(t=0) \to \nu_\beta(t)$,
$$P_{\nu_\alpha \to \nu_\beta}(t) = \delta_{\alpha\beta} - 4\sum_{j>r} \mathrm{Re}\!\left(U^*_{\alpha j} U_{\beta j} U_{\alpha r} U^*_{\beta r}\right)\sin^2\!\left(\frac{\Delta m^2_{jr} L c^3}{4\hbar E}\right) + 2\sum_{j>r} \mathrm{Im}\!\left(U^*_{\alpha j} U_{\beta j} U_{\alpha r} U^*_{\beta r}\right)\sin\!\left(\frac{\Delta m^2_{jr} L c^3}{2\hbar E}\right), \tag{9}$$
where $\Delta m^2_{jr} \equiv m_j^2 - m_r^2$. If there is no CP violation we can choose the mixing matrix $U$ to be real, which ensures that the oscillation formula has no imaginary part. Including the factors of $\hbar$ and $c$, we can write the argument of the oscillatory quantity $\sin^2\!\left(\Delta m^2_{jr} L c^3/4\hbar E\right)$ that appears in Eq. (9) as
$$\frac{\Delta m^2_{jr} L c^3}{4\hbar E} \approx 1.27\,\frac{\Delta m^2_{jr}[\mathrm{eV}^2]\; L[\mathrm{km}]}{E[\mathrm{GeV}]}. \tag{10}$$
So for large $L/E$, neutrino oscillation provides experimental access to very tiny neutrino masses. In Fig. 1 we show the probabilities for the initial flavor $|\nu_e\rangle$ to be found as $|\nu_e\rangle$, $|\nu_\mu\rangle$, or $|\nu_\tau\rangle$ after time $t$, with Fig. 1a showing long-range and Fig. 1b showing short-range oscillation. These transition-probability plots are obtained assuming the normally ordered (NO) neutrino mass spectrum ($m_3 > m_2 > m_1$), with the oscillation parameters (mixing angles and mass-squared differences) taken from [26]. As $\delta$ has not been determined by experiments, it can take any value between $0$ and $2\pi$; for simplicity we take $\delta = 0$ for our oscillation plots.
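As a quick numerical illustration of Eq. (9), the following minimal Python sketch (our own addition, not code from the paper) evaluates the full transition-probability matrix via the amplitude of Eq. (5); the mixing angles and mass-squared differences used below are illustrative placeholders rather than the fitted values of [26].

```python
import numpy as np

def pmns(t12, t13, t23, delta=0.0):
    """PMNS matrix from the three mixing angles and the Dirac CP phase.
    Majorana phases are omitted: they cancel in oscillation probabilities."""
    s12, c12 = np.sin(t12), np.cos(t12)
    s13, c13 = np.sin(t13), np.cos(t13)
    s23, c23 = np.sin(t23), np.cos(t23)
    R23 = np.array([[1, 0, 0], [0, c23, s23], [0, -s23, c23]], dtype=complex)
    R13 = np.array([[c13, 0, s13 * np.exp(-1j * delta)],
                    [0, 1, 0],
                    [-s13 * np.exp(1j * delta), 0, c13]], dtype=complex)
    R12 = np.array([[c12, s12, 0], [-s12, c12, 0], [0, 0, 1]], dtype=complex)
    return R23 @ R13 @ R12

def prob(U, dm2_21, dm2_31, L_km, E_GeV):
    """Matrix P[alpha, beta] = P(nu_alpha -> nu_beta) at baseline L, energy E.
    Amplitude: sum_j U*_{alpha j} U_{beta j} exp(-i m_j^2 L / 2E), with the
    phase written in practical units as 2 * 1.27 * dm2[eV^2] L[km] / E[GeV]."""
    m2 = np.array([0.0, dm2_21, dm2_31])            # m_j^2 relative to m_1^2
    phase = np.exp(-2j * 1.27 * m2 * L_km / E_GeV)
    amp = (U.conj() * phase[None, :]) @ U.T         # amp[alpha, beta]
    return np.abs(amp) ** 2

U = pmns(0.59, 0.15, 0.84)                          # placeholder angles (rad)
P = prob(U, dm2_21=7.4e-5, dm2_31=2.5e-3, L_km=1000.0, E_GeV=1.0)
assert np.allclose(P.sum(axis=1), 1.0)              # each row sums to one
```

Scanning `L_km` at fixed `E_GeV` traces out long- and short-range profiles of the kind shown in Fig. 1.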
III. DISCRETE-TIME QUANTUM WALK
The discrete-time quantum walk (DTQW) is a quantum analogue of the classical random walk which evolves a particle with an internal degree of freedom in a superposition of position space. The Hilbert space on which the walk evolves is $\mathcal{H} = \mathcal{H}_c \otimes \mathcal{H}_p$, where $\mathcal{H}_c$ is spanned by the internal degrees of freedom of the particle (hereafter called the coin space) and $\mathcal{H}_p$ is spanned by the position degree of freedom. Each step of the DTQW evolution operator $W$ is a composition of a quantum coin operator $C$ and a coin-dependent position shift operator $S$,
$$W = S\,(C \otimes I).$$
The identity operator $I$ in $(C \otimes I)$ acts only on the spatial degree of freedom and the operator $C$ acts only on the coin space. The operator $C$ evolves the particle's basis states into superpositions of the basis states, and the operator $S$ shifts the particle into a superposition of position states depending on the basis states of the particle. A DTQW on a one-dimensional space is commonly defined on a particle with two internal degrees of freedom, $|\downarrow\rangle$ and $|\uparrow\rangle$. Therefore, the coin operation $C$ can be any $2\times 2$ unitary operator, and the shift operator, which shifts the state by a distance $a$ in position space, will be of the form
$$S = |\downarrow\rangle\langle\downarrow| \otimes T_- + |\uparrow\rangle\langle\uparrow| \otimes T_+,$$
where $T_\mp|x\rangle = |x \mp a\rangle$. The operator $T_-$ shifts the particle one step along the negative $x$-axis and $T_+$ shifts it one step along the positive $x$-axis. With this standard definition, the DTQW spreads quadratically faster in position space than the classical random walk. This description has been extended to systems with both higher spatial dimensions and higher coin (internal) dimensions [27][28][29][30].
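To make the construction concrete, here is a minimal, self-contained sketch of this two-level walk on a periodic lattice (our own illustration; the real-rotation coin and the symmetric initial state are arbitrary example choices):

```python
import numpy as np

def dtqw_step(psi, theta):
    """One step W = S (C x I).  psi has shape (2, N): row 0 is the up
    component, row 1 the down component, columns are the N lattice sites."""
    c, s = np.cos(theta), np.sin(theta)
    up, down = psi[0], psi[1]
    # Coin: rotate the internal state at every site.
    new_up = c * up + s * down
    new_down = -s * up + c * down
    # Shift: the up component moves one site right, the down component left.
    return np.vstack([np.roll(new_up, +1), np.roll(new_down, -1)])

N = 401
psi = np.zeros((2, N), dtype=complex)
psi[:, N // 2] = [1 / np.sqrt(2), 1j / np.sqrt(2)]  # localized initial state
for _ in range(100):
    psi = dtqw_step(psi, np.pi / 4)
prob = np.abs(psi).sum(axis=0) ** 0 * np.abs(psi[0]) ** 2 + np.abs(psi[1]) ** 2
prob = np.abs(psi[0]) ** 2 + np.abs(psi[1]) ** 2    # position distribution
```

The standard deviation of `prob` grows linearly with the step count, the quadratic speed-up over the classical walk mentioned above.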
Here we define the DTQW evolution in one spatial dimension $x$ for a particle with a $d$-dimensional coin space. We set each discrete position-space step to $a$ and each discrete time step to $\delta t$, the same throughout the evolution of the walk. For a $d$-dimensional system with basis states $|q\rangle$, where $q \in \{1, 2, 3, \cdots, d\}$, the spatial shift operation can be defined as
$$S = \sum_{q=1}^{d} |q\rangle\langle q| \otimes T_q,$$
where, depending on the value of $q$, $T_q$ takes one of the two possible forms
$$T_\mp = \sum_x |x \mp a\rangle\langle x|,$$
with $|x\rangle \in \mathcal{H}_p$ (the space has to be periodic or infinite) and $|q\rangle \in \mathcal{H}_c$, and the coin operation $C$ is a $d \times d$ unitary matrix. As the momentum operator $\hat{p}$ is the generator of spatial translations in quantum mechanics, we can write the components of the shift operators in the form
$$T_\mp = e^{\pm i\hat{p}a/\hbar} = \sum_k e^{\pm ika}\,|k\rangle\langle k|,$$
where $|k\rangle$ is a momentum eigenvector with eigenvalue $k$. In the following section we build on this description of the DTQW to simulate the neutrino oscillation probability.
IV. MIMICKING NEUTRINO OSCILLATION BY QUANTUM WALK
There are three mutually orthonormal Dirac mass eigenstates of the neutrino, each with two spin degrees of freedom. Hence, the complete neutrino dynamics in one spatial dimension is described using six internal degrees of freedom. To understand neutrino oscillation dynamics from the DTQW perspective and simulate neutrino oscillation in another physical system, we need a system in which we can access six internal degrees of freedom. Let us define the internal-space basis as
$$\{|1,\uparrow\rangle, |1,\downarrow\rangle, |2,\uparrow\rangle, |2,\downarrow\rangle, |3,\uparrow\rangle, |3,\downarrow\rangle\},$$
where the first label denotes the mass sector and the second the spin, and the six basis vectors are identified with the standard basis of $\mathbb{C}^6$. The dynamics of each flavor of the neutrino is defined by the Dirac Hamiltonian. Earlier results have reported a simulation of the two-state Dirac Hamiltonian for a massive particle [13,14]. Therefore, to simulate the six-state neutrino dynamics we form the set of three pairs and define a DTQW with different coin parameters for each pair. For this purpose, we represent the coin space in the form $\mathcal{H}_c = \bigoplus_{j=1}^{3} \mathrm{span}\{|j,\uparrow\rangle, |j,\downarrow\rangle\}$. With three coins using different parameters, the complete evolution operator, composed of the coin and shift operators, is block diagonal, and one time step ($\delta t$) of the walk operator takes the form
$$W = \bigoplus_{j=1}^{3} W_j, \qquad W_j = S_j\,(C_j \otimes I), \tag{22}$$
where the quantum coin operation and the coin-state (spin)-dependent position shift operators are defined as
$$C_j = \cos\theta_j\,|j,\uparrow\rangle\langle j,\uparrow| + \sin\theta_j\,|j,\uparrow\rangle\langle j,\downarrow| - \sin\theta_j\,|j,\downarrow\rangle\langle j,\uparrow| + \cos\theta_j\,|j,\downarrow\rangle\langle j,\downarrow|$$
and
$$S_j = |j,\uparrow\rangle\langle j,\uparrow| \otimes T_+ + |j,\downarrow\rangle\langle j,\downarrow| \otimes T_-.$$
In Eq. (22), the $j = 1$ sector operates on $\mathrm{span}\{|1,\uparrow\rangle, |1,\downarrow\rangle\} \otimes \mathcal{H}_p$, the $j = 2$ sector on $\mathrm{span}\{|2,\uparrow\rangle, |2,\downarrow\rangle\} \otimes \mathcal{H}_p$, and the $j = 3$ sector on $\mathrm{span}\{|3,\uparrow\rangle, |3,\downarrow\rangle\} \otimes \mathcal{H}_p$.
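A compact way to see this structure is to write the one-step operator in the momentum basis, where the shift reduces to the phases $e^{\pm ika}$ and $W$ becomes, for each $k$, a $6\times 6$ block-diagonal matrix. The sketch below (our own illustration; the sign convention for which spin component moves right is an assumption and only flips the sign of $\tilde{k}$) builds this matrix, using the coin angles quoted later in Sect. V, and checks unitarity.

```python
import numpy as np

def walk_block(theta_j, ktilde):
    """2x2 block W_j(k) = diag(e^{-i k~}, e^{+i k~}) @ C(theta_j),
    with k~ = k a; the coin is the real rotation by theta_j."""
    coin = np.array([[np.cos(theta_j), np.sin(theta_j)],
                     [-np.sin(theta_j), np.cos(theta_j)]], dtype=complex)
    shift = np.diag([np.exp(-1j * ktilde), np.exp(1j * ktilde)])
    return shift @ coin

def walk_operator(thetas, ktilde):
    """Assemble the 6x6 block-diagonal operator from the three sectors."""
    W = np.zeros((6, 6), dtype=complex)
    for j, th in enumerate(thetas):
        W[2 * j:2 * j + 2, 2 * j:2 * j + 2] = walk_block(th, ktilde)
    return W

W = walk_operator([0.001, 0.00615654, 0.0664688], 0.01)
assert np.allclose(W.conj().T @ W, np.eye(6))   # unitarity check
# Each block has eigenvalues e^{±iE_j} with cos(E_j) = cos(theta_j) cos(k~),
# matching the dispersion relation given in the text below.
```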
The effective Hamiltonian acting on the $j$-th sector is defined as
$$H_j = \frac{i\hbar}{\delta t}\,\ln(W_j),$$
where $\hbar$ is the reduced Planck constant. $H_j$ takes the form of a one-dimensional Dirac Hamiltonian for a particular range of walk parameters [13]. The coin operation is homogeneous and the shift operator is diagonalizable in the momentum basis $\{|k\rangle\}$; hence the walk operator is diagonalizable in the same basis.
Denoting $ka$ as $\tilde{k}$, the positive energy eigenvalue of $H_j$ is
$$E_j = \frac{\hbar}{\delta t}\cos^{-1}\!\left(\cos\theta_j \cos\tilde{k}\right)$$
for all $j = 1, 2, 3$, with corresponding positive-energy eigenvectors $|\nu_j^k\rangle \otimes |k\rangle$, where $|\nu_j^k\rangle$ is the spinor part in the $j$-th sector. The initial state $|\Psi(0)\rangle$ of the neutrino corresponding to the electron flavor is prepared using the mixing matrix $U$ acting on each sector,
$$|\Psi(0)\rangle = |\nu_e\rangle = \sum_j U^*_{ej}\,|\nu_j^k\rangle \otimes |k\rangle. \tag{25}$$
This initial state is a momentum eigenstate. After $t = \text{integer} \times \delta t$ steps of the walk using the evolution operator $W$ we get
$$|\Psi(t)\rangle = W^{t/\delta t}\,|\Psi(0)\rangle = \sum_j U^*_{ej}\, e^{-iE_j t/\hbar}\,|\nu_j^k\rangle \otimes |k\rangle.$$
Therefore, the survival probability of the state $|\nu_e\rangle$ under the time evolution is
$$P_t(\nu_e \to \nu_e) = |\langle \nu_e|\Psi(t)\rangle|^2,$$
and similarly the oscillation probabilities of the other flavors are $P_t(\nu_e \to \nu_\mu) = |\langle \nu_\mu|\Psi(t)\rangle|^2$ and $P_t(\nu_e \to \nu_\tau) = |\langle \nu_\tau|\Psi(t)\rangle|^2$. In this scheme, all states of the form of Eq. (25) are evolved in the momentum basis, as was done to implement the walk in the momentum basis [33]. In the momentum basis the shift operators are diagonal; therefore, for a state in the momentum basis the whole walk operator acts simply as a coin operator. However, if one implements this walk in a position basis, the wave function has to be distributed across positions because of the relation
$$|k\rangle = \frac{1}{\sqrt{2N+1}} \sum_x e^{ikx}\,|x\rangle, \tag{32}$$
where $2N + 1$ is the total number of sites. For our description, the position space has to be periodic or infinite. For practical simulation it is reasonable to choose a periodic lattice with the identification $N + 1 \equiv -N$; in that case, in place of $x \in a\mathbb{Z}$ we need $x \in a\mathbb{Z}_{2N+1}$. From the above scheme we see directly that a single six-dimensional quantum particle can fully simulate the neutrino oscillation mechanism. However, it is experimentally difficult to find a six-dimensional system, so we also point out potential many-particle systems which can simulate neutrino oscillations.
V. NUMERICAL SIMULATION
Simulation of neutrino oscillation with a DTQW can be established by finding a correspondence, ideally a one-to-one mapping, between the neutrino oscillation parameters and the DTQW evolution parameters.
Two conditions must be satisfied simultaneously to simulate neutrino oscillation: (i) $\theta_j$ and $\tilde{k}$ should be small in Eq. (22), such that the DTQW produces the Dirac Hamiltonian; (ii) neutrinos are ultra-relativistic particles, so the relation $\tilde{k} \gg \theta_j$ should be satisfied for all $j = 1, 2, 3$.
The Dirac equation is reproduced when we identify $\theta_j = m_j c^2 \delta t/\hbar$ and $\tilde{k} = ka = kc\,\delta t$. Comparison with Eq. (10) of the neutrino oscillation then fixes the walk parameters in terms of the physical mass-squared differences and energy. For physical orders of magnitude of $\delta t$ and $\Delta\theta^2_{jr} \equiv \theta_j^2 - \theta_r^2$, however, the required number of walk steps is very difficult to achieve in real lattice experiments at present.
We should note that the oscillation profile is determined by the quantity $\omega t$, where $\omega = (E_1 - E_2)/\hbar$. The only condition for simulating neutrino oscillation is that $\omega t$ be the same in the simulation system as in the real experiment. This implies that if we increase the frequency $\omega$, we can decrease the number of walk steps to a realizable value. Thus, in order to simulate successfully, we increase the oscillation frequencies, proportional to $\sqrt{\tilde{k}^2 + \theta_j^2} - \sqrt{\tilde{k}^2 + \theta_r^2}$, such that the same oscillation profile is obtained with a smaller number of walk steps $t/\delta t$. That is to say, we zoom in on the frequency and zoom out of the number of DTQW steps.
Dirac dynamics is only produced by the DTQW evolution when $\theta_j$ and $\tilde{k}$ are both small. Respecting this condition, the numbers of walk steps we have chosen are 450 and 4500 for the short- and long-range oscillation profiles, respectively, with the parameter choices $\tilde{k} = 0.01$ rad, $\theta_1 = 0.001$ rad, $\theta_2 = 0.00615654$ rad, and $\theta_3 = 0.0664688$ rad. In Fig. 2 we show the neutrino oscillation probability as a function of the number of DTQW steps. The long- and short-range neutrino flavor oscillations shown in Fig. 1, obtained from real neutrino experiments, and those from our DTQW simulation, Fig. 2, match perfectly.
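The following sketch (ours, not the authors' code) reproduces this kind of flavor-oscillation profile from the quoted walk parameters. It assumes the spinor overlaps between sectors are approximately 1, which holds in the small-$\theta_j$, small-$\tilde{k}$ regime, and it uses placeholder mixing angles with $\delta = 0$ (so $U$ is real), not the fitted values of [26].

```python
import numpy as np

def rot(i, j, th, n=3):
    """Real rotation by angle th in the (i, j) plane (1-indexed)."""
    R = np.eye(n)
    i, j = i - 1, j - 1
    R[i, i] = R[j, j] = np.cos(th)
    R[i, j], R[j, i] = np.sin(th), -np.sin(th)
    return R

theta = np.array([0.001, 0.00615654, 0.0664688])   # coin angles (rad)
ktilde = 0.01                                      # k a (rad)
E = np.arccos(np.cos(theta) * np.cos(ktilde))      # E_j per step (hbar=dt=1)

# Placeholder mixing angles (delta = 0 => real, orthogonal U).
U = rot(2, 3, 0.84) @ rot(1, 3, 0.15) @ rot(1, 2, 0.59)

steps = np.arange(4501)                            # long-range profile
# P(nu_e -> nu_beta)(t) = |sum_j U_{e j} U_{beta j} e^{-i E_j t}|^2
phase = np.exp(-1j * np.outer(steps, E))           # shape (T, 3)
amp = phase * U[0]                                 # weight by the e-row of U
P = np.abs(amp @ U.T) ** 2                         # columns: beta = e, mu, tau
assert np.allclose(P.sum(axis=1), 1.0)             # probabilities sum to one
```

Truncating `steps` at 450 gives the short-range profile; plotting the three columns of `P` against `steps` yields curves of the type shown in Fig. 2.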
Instead of running the quantum walker for 4500 or 450 steps in a single run, we can divide the whole profile into 450 or 45 runs, respectively, with each run lasting 10 DTQW steps. In that case, instead of taking the neutrino flavor state as the initial state of each run, we have to take as the initial state of the $r$th run the state $W^{10(r-1)}|\nu_e\rangle$, where $r \in [1, 450]$ and $r \in [1, 45]$ for the long- and short-range cases, respectively. Alternatively, we can store the final state produced at the end of the $(r-1)$th run and start the next run from that state. We can further reduce the number of walk steps needed to obtain the same oscillation profile by going to the non-relativistic regime, where the momentum can be neglected with respect to the neutrino masses [38]; there, however, the oscillation frequencies are proportional to the linear differences $m_j - m_l$ rather than, as usual, to $m_j^2 - m_l^2$. In the previous section we presented three possible ways of simulation: (1) a single six-dimensional system, (2) a three-qubit system, or (3) a qubit-qutrit system. All these schemes are equivalent from the numerical simulation perspective, because all operators are defined by $\dim\{\mathcal{H}_c \otimes \mathcal{H}_p\} \times \dim\{\mathcal{H}_c \otimes \mathcal{H}_p\}$ matrices and vectors in $\mathcal{H}_c \otimes \mathcal{H}_p$, where $\dim\{\mathcal{H}_c\} = 6$.
A. Entanglement between spin and position space
In the previous sections we have assumed that all the particles are in the same momentum eigenstate $|k\rangle$, but in reality they can be in a superposition of momentum eigenstates.
In that case, we have to define the electron-neutrino state as
$$|\nu_e\rangle = \sum_k p(k, e)\,|\nu_e^k\rangle \otimes |k\rangle = \sum_{k,j} p(k, e)\,U^*_{ej}\,|\nu_j^k\rangle \otimes |k\rangle, \tag{40}$$
where $|\nu_e^k\rangle$ denotes the spin part of the electron neutrino in the particular momentum eigenstate $|k\rangle$, and $|\nu_j^k\rangle$ is the spin part of the $j$th mass eigenstate in the momentum eigenstate $|k\rangle$.
Let us consider the initial state of the particle $|\psi(0)\rangle = |\nu_e\rangle$; after $t/\delta t$ steps of walk evolution we have the state
$$|\psi(t)\rangle = \sum_{k,j} p(k, e)\,U^*_{ej}\, e^{-i\omega_j^k t}\,|\nu_j^k\rangle \otimes |k\rangle,$$
where $\omega_j^k = \frac{1}{\hbar} E_j(k)$, and $E_j(k)$ is the positive energy eigenvalue of the $j$th mass eigenstate when the corresponding momentum eigenvalue is $k$, as in Eq. (25). Similarly to Eq. (40), we can define any general flavor state
$$|\nu_\alpha\rangle = \sum_{k} p(k, \alpha)\,|\nu_\alpha^k\rangle \otimes |k\rangle.$$
The instantaneous density matrix of the system is $\rho(t) = |\psi(t)\rangle\langle\psi(t)|$. If we partially trace out the state with respect to the position basis (or momentum basis), we obtain the reduced density matrix defined on $\mathcal{H}_c$,
$$\rho_c(t) = \mathrm{Tr}_p\,[\rho(t)].$$
The expression for the oscillation probability is then modified to
$$P_t(\nu_e \to \nu_\alpha) = \sum_k |p(k, \alpha)|^2\, P_t(\nu_e \to \nu_\alpha, k),$$
where $P_t(\nu_e \to \nu_\alpha, k)$ is the probability used in the previous sections, when the neutrino occupies only one momentum eigenstate.
The above analysis is the same for any $\alpha$ other than $e$. When $p(k, e) = p(k, \alpha) = \delta_{k,k_0}$, $\rho_c(t)$ is a pure-state projector, and the amount of entanglement between position space and the internal degrees of freedom (spin space) is always zero, as the partially traced state is pure. Here we use the entanglement entropy as our measure,
$$S(t) = -\mathrm{Tr}\,[\rho_c(t)\ln\rho_c(t)] \le \ln 6,$$
where $\rho_c(t)$ is a $6 \times 6$ positive semi-definite matrix with unit trace. Considering a Gaussian-like distribution function, the probability amplitude is defined as
$$p(k, \alpha) = \mathcal{N}\, e^{-\xi(\tilde{k} - \tilde{k}_0)^2}, \tag{47}$$
with $\mathcal{N}$ a normalization constant. We assume the distribution (47) is the same for all $\alpha = e, \mu, \tau$, as the source is the same and the neutrino propagates through free space without distortion. Our momentum eigenvalues are confined to an interval, $\tilde{k} \in [\tilde{k}_0 - \epsilon, \tilde{k}_0 + \epsilon]$, and $\xi$ determines the probability weight of the momentum distribution. It is evident from the last expression of Eq. (44) that increasing the interval, i.e., the value of $\epsilon$, increases the corresponding entanglement entropy. In this sense, the entanglement entropy can be used as a measure of the span of the neutrino wave packet in momentum space: larger and smaller entropy imply larger and smaller span in momentum space, respectively. The wave-function description in lattice space is obtained by the Fourier transformation of the momentum-space description, Eq. (32), and the span in position space is inverse to the span in momentum space. So larger entanglement means less uncertainty in measuring the instantaneous position, and a more particle-like (localized) nature.
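As a rough numerical cross-check, the sketch below (our own; the spinor overlaps are again set to 1 in the small-parameter regime, and only the three occupied positive-energy levels of the six are tracked, so the bound here is $\ln 3$ rather than $\ln 6$) computes this entropy for Gaussian-like weights; the row of $U$ is a normalized placeholder.

```python
import numpy as np

theta = np.array([0.001, 0.00615654, 0.0664688])   # coin angles (rad)
k0, eps, xi = 0.01, 0.05, 100.0                    # window centre, width, weight
ks = np.arange(k0 - eps, k0 + eps + 1e-12, 0.001)  # discrete momentum grid
p = np.exp(-xi * (ks - k0) ** 2)                   # Gaussian-like amplitudes
p = p / np.sqrt(np.sum(np.abs(p) ** 2))            # normalize sum |p|^2 = 1

Ue = np.array([0.82, 0.55, 0.15])                  # placeholder row U*_{e j}
Ue = Ue / np.linalg.norm(Ue)

def coin_entropy(t):
    """Von Neumann entropy of the reduced coin state after t walk steps."""
    E = np.arccos(np.cos(theta)[None, :] * np.cos(ks)[:, None])  # (K, 3)
    amps = p[:, None] * Ue[None, :] * np.exp(-1j * E * t)        # (K, 3)
    rho = amps.T @ amps.conj()          # 3x3 reduced density matrix, Tr = 1
    w = np.linalg.eigvalsh(rho).real
    w = w[w > 1e-12]
    return float(-(w * np.log(w)).sum())

print(coin_entropy(0), coin_entropy(4500))  # 0 at t = 0; grows as the
                                            # momentum components dephase
```

Widening the window (larger `eps`) makes the entropy grow faster and saturate higher, in line with the trend described above.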
We would like to point out that the "delocalization of the neutrino states" discussed in the literature [39,40] is related to the undetectability of the oscillation profile when the oscillation wavelength (which is directly proportional to the central momentum of the wave packet) is smaller than the spread of the neutrino wave packet in position space. In our case, by contrast, we relate the entanglement between spin and space to the amount of wave-packet spreading or delocalization. This wave-packet spread in position space is a property of the spatial distribution of the neutrino source wave function and is uncorrelated with the neutrino oscillation wavelength.
In Fig. 3 we plot the entanglement entropy as a function of walk steps for different values of the parameter $\epsilon = 0.02, 0.05, 0.15$, with a grid spacing in $\tilde{k}$ of 0.001. For the numerical simulation, $\tilde{k}_0 = 0.01$ rad and $\xi = 100$ have been used.
From the numerical simulations we observe that, for a large number of steps, the entanglement measure almost saturates to a fixed value, and with increasing $\epsilon$ the entanglement entropy saturates faster and at a higher value. This is a signature of the constant coupling between position space and the internal degrees of freedom. For a time-varying coupling in the Hamiltonian, we can expect a deviation from saturation.
B. Correlation between position space and a particular flavor

The spin part of the $\alpha$-flavor neutrino can be defined by tracing out the momentum part; $\mathrm{Tr}_k\,[|\nu_\alpha\rangle\langle\nu_\alpha|]$ is a mixed state in general. Hence, from Eq. (43), projecting the instantaneous state on $\mathrm{Tr}_k\,[|\nu_\alpha\rangle\langle\nu_\alpha|]$ and tracing out the spin part gives a reduced density matrix $\rho_\alpha(t)$ corresponding to the $\alpha$-flavor neutrino state (Eq. (49)). The entropy measure
$$S_\alpha(t) = -\mathrm{Tr}\,[\rho_\alpha(t)\ln\rho_\alpha(t)]$$
captures a correlation between the $\alpha$-flavor and position space (or momentum space). In Eq. (49) we do not take the trace over the whole coin space; we project on the mixed state $\sum_k |p(\alpha, k)|^2\,|\nu_\alpha^k\rangle\langle\nu_\alpha^k|$, so this entropy is not strictly an entanglement measure between the $\alpha$-flavor and position space. However, it can still be used as a correlation measure, particularly to compare the trends of the correlations of the different flavors with position space. In Fig. 4 we show this measure of correlation for $\alpha = e, \mu, \tau$ as a function of the number of walk steps for $\epsilon = 0.01$. We see an increase in entropy at the beginning with increasing step number, after which all three flavors show an identical trend of decrease and increase of the entropy around the mean value. When the same measure is considered for a very large number of steps, shown in Fig. 5, we see fluctuations around the mean value without any well-defined pattern, and these fluctuations show an identical trend for all flavors. From this we can say that each flavor is equally correlated with position space during propagation. In Ref. [41] it was shown that the entanglement between the coin and position space of a DTQW with strong localization that is not centered at a single node (spatially disordered walk) is smaller than that of a widely spread localized state (temporally disordered walk). Therefore, from the absence of zero correlation at any point of time, we conclude that the neutrino flavor is not localized in position space at any given point; the degree of delocalization can, however, be varied by changing the value of $\epsilon$. A comparatively higher correlation corresponds to a more widely spread wave packet.
VII. CONCLUSION
Neutrinos are very weakly interacting particles, so only detectors of very large size make detection of a significant number of neutrinos possible. Various kinds of detector are used for neutrino oscillation experiments; for example, Super-Kamiokande [34] uses 50,000 tons of ultra-pure water and the Sudbury Neutrino Observatory (SNO) [35] uses 1000 tonnes of ultra-pure heavy water. Simulating neutrino oscillation and other high-energy phenomena in a low-energy experimental set-up gives access to intricate features of the dynamics which are not easy to probe in a high-energy set-up. In this work we have shown that the three-flavor neutrino oscillation obtained from these massive experimental set-ups can be simulated using a DTQW with a suitable set of walk evolution parameters. Using the DTQW, short-range and long-range oscillations have been obtained by simply varying the number of steps of the walk. The DTQW has been experimentally implemented using trapped ions [17], cold atoms [18], NMR [19], and photons [20]; therefore, neutrino oscillations can be simulated in any of these systems. Beyond simulating neutrino oscillation, our work indicates that the quantum walk can play an important role in simulating and understanding the dynamics of various other physical processes in nature. Mapping these simulations to real experimental measurements gives us access to exploring quantum correlations like entanglement and to understanding neutrino physics, and high-energy physics in general, from the quantum information perspective. Here we introduced a correlation measure between flavor and position space that yields information about the spatial degrees of freedom of the neutrino upon detection of a particular flavor. Simulating high-energy quantum dynamics in a low-energy quantum system and understanding physical phenomena from the quantum information theory perspective is an important topic of contemporary research. A preprint of this paper, arXiv:1604.04233 [38], has already motivated research in this direction by considering the extension towards simulation of neutrino oscillation in matter using the DTQW [42], without overlapping with the results shown in this paper.
"Physics"
] |
Experimental results for active control of multimodal vibrations by optimally placed piezoelectric actuators
Vibration damping is an effective strategy to enhance the life-cycle and performance of mechanical components. Passive control systems involve lower costs and are easier to implement, but their bandwidth is limited, whereas active systems provide larger bandwidth and higher adaptability to dynamic loads at the price of higher costs and complexity. Recent advances in smart materials have promoted the development of smart structures suitable for vibration damping and control. Among them, piezoelectric systems seem to be the most promising; however, their efficiency relies on their placement. In a previous work the authors proposed and validated an analytical method to detect the optimal location of piezoelectric plates to control the multi-modal vibrations of a cantilever beam. Recent findings show that, if all actuators are activated simultaneously, the optimization problem can be traced back to the determination of the optimal potential distribution on all the piezoelectric actuators. In this paper the above method is applied to a cantilever beam with 13 pairs of surface-mounted PZT plates under the excitation provided by an electrodynamic shaker. The experimental damping of combinations of two flexural modes has been performed by means of a special-purpose workbench, and the damping efficiency has been assessed by means of a micro I.C.P. accelerometer. The results show that the multimode vibrations of the cantilever beam can be efficiently damped if the potential distribution on all the PZT plates is optimized.
Introduction
Vibration control is one of the main challenges in many engineering fields. Structural vibrations may weaken the fatigue resistance and decrease the life-cycle of mechanical parts, leading to failure in the worst cases. In gas turbine engines, e.g., vibrations arise from the interaction between the fluid and the blades [1,2]; therefore many damping strategies have been proposed and implemented in order to increase component integrity. In recent decades smart materials have been studied for vibration control applications and, among them, piezoelectric materials are considered the most promising because of their fast dynamic response. Several studies have shown that the efficiency of piezoelectric elements can be enhanced by optimizing their placement on the structure [3,4]. Previous studies by the authors [5][6][7][8] proposed an analytical model to optimize the placement of piezoelectric elements to control the multimode vibrations of a cantilever beam. This model was further developed in [9] with all the piezo actuators continuously activated, and the optimization problem was traced back to the identification of the optimal potential distribution on the elements that cover the beam. In this work the method has been experimentally tested by means of 13 pairs of surface-mounted PZT plates and an electrodynamic shaker. The outcomes confirm that the multimode vibrations of a beam can be efficiently damped if the best potential distribution on all the PZT actuators is provided.
Experimental setup
In order to verify the proposed model, an aluminium beam covered with 13 pairs of PZT plates (Table 1) was attached to a shaker by means of a purposely designed joint. The PZT plates and the shaker are supplied with a voltage composed of two harmonics (Eq. 1) by two independent arbitrary function generators, so that two flexural modes can be excited simultaneously, where $v_0$ ranges from 0 to 20 V, $\omega_{i_1}$ and $\omega_{i_2}$ are the modal frequencies considered, and $r$ ranges from 0 to 1 and identifies the prevalence of one mode with respect to the other. Referring to Fig. 1, the arbitrary function generator AFG1 provides $v_{piezo}(t)$ (which can be amplified up to ±300 V peak-to-peak) to the PZT plates, and AFG2, amplified by the power amplifier (SignalForce GW-PA300E), drives the shaker. Every piezo plate is linked to a mechanical relay that can switch the sign of $v_{piezo}(t)$ by means of a microcontroller configured in a LabVIEW environment. A data acquisition board (NI USB 6251) collects and processes the signal of a micro I.C.P. accelerometer (PCB 352A56) fixed on the free end of the beam. In this way it is possible to assess the damping efficacy of any potential distribution and to compare distributions, verifying whether the maximum damping efficacy (i.e., the minimum displacement of the free end) is achieved when the theoretical optimal potential distribution is selected. The overall uncertainty of the measurement set-up described above is estimated to be 5%.
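As a side note, the two-harmonic drive can be prototyped in a few lines. The sketch below assumes the weighting form $v(t) = v_0\,[\,r\sin(\omega_{i_1} t) + (1-r)\sin(\omega_{i_2} t)\,]$ suggested by the stated roles of $v_0$ and $r$ (the exact Eq. 1 is not reproduced here), and the modal frequencies and sampling rate are purely illustrative.

```python
import numpy as np

def v_piezo(t, v0, w1, w2, r):
    """Two-harmonic excitation; r in [0, 1] weights mode i1 against mode i2."""
    return v0 * (r * np.sin(w1 * t) + (1.0 - r) * np.sin(w2 * t))

t = np.linspace(0.0, 0.5, 50_000)                  # 0.5 s at 100 kS/s
drive = v_piezo(t, v0=10.0,                        # volts, within the 0-20 V range
                w1=2 * np.pi * 18.0,               # hypothetical mode-1 frequency (Hz)
                w2=2 * np.pi * 113.0,              # hypothetical mode-2 frequency (Hz)
                r=0.5)                             # equal weighting of the two modes
```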
Results and discussion
In this section the experimental results for the optimal potential distributions for multimode vibrations are reported and briefly commented. The dimensionless $\zeta_1$, $\zeta_2$ represent the points of the potential sign change (as illustrated in the example of Fig. 2). Since the beam is covered by 13 pairs of PZT plates, a large number of different potential distributions has been tested for each pair of coupled modes [9]. It can be observed that the experimental optimal potential distributions (the red region) are in good agreement with the theoretical results, and all the potential sign changes are well reproduced. Furthermore, the piezoelectric plates are always activated, regardless of the coupled modes excited by the shaker; in this way the damping efficiency of the PZT plates is enhanced.
Conclusion
In this work, the multimodal vibrations of an aluminium beam, induced by an electrodynamic shaker, were efficiently damped by means of 13 pairs of PZT plates. The best damping action was found to be strictly related to the optimal distribution of the potential on the piezoelectric plates. Future work will focus on the development of a suitable experimental set-up to verify the extension of the model to rotating beams, in order to assess the optimal actuator placement, e.g., for damping turbomachinery blade vibrations.
Figure 2. Example of a potential distribution acting on the PZT actuators.
Figure 3. Experimental results for the coupling between the first and second modes for five values of r. The top left figure shows the theoretical optimal potential distributions.
Table 1. Dimensions (in mm) of the aluminium beam and PZT plates.
"Engineering",
"Materials Science"
] |
A Novel Molecular Signature of Cancer-Associated Fibroblasts Predicts Prognosis and Immunotherapy Response in Pancreatic Cancer
Cancer-associated fibroblasts (CAFs), a prominent population of stromal cells, play a crucial role in tumor progression, prognosis, and treatment response. However, the relationship among CAF-based molecular signatures, clinical outcomes, and tumor microenvironment infiltration remains largely elusive in pancreatic cancer (PC). Here, we collected multicenter PC data and performed integrated analysis to investigate the role of CAF-related genes (CRGs) in PC. Firstly, we demonstrated that α-SMA+ CAFs were the most prominent stromal components and correlated with the poor survival rates of PC patients in our tissue microarrays. Then, we discriminated two diverse molecular subtypes (CAF clusters A and B) and revealed the significant differences in the tumor immune microenvironment (TME), four reported CAF subpopulations, clinical characteristics, and prognosis in PC samples. Furthermore, we analyzed their association with the immunotherapy response of PC patients. Lastly, a CRG score was constructed to predict prognosis, immunotherapy responses, and chemosensitivity in pancreatic cancer patients. In summary, these findings provide insights into further research targeting CAFs and their TME, and they pave a new road for the prognosis evaluation and individualized treatment of PC patients.
Introduction
The poor prognosis of pancreatic cancer (PC) urges us to more deeply understand its potential molecular mechanism and seek better therapies. Cancer-associated fibroblasts (CAFs), a significant fraction of the pancreatic cancer stroma, contribute to a dense stromal accumulation in PC [1,2]. Previous studies have demonstrated that CAFs can facilitate the malignant phenotypes of tumors, particularly tumorigenesis and invasion, inflammation, and extracellular matrix (ECM) remodeling [3,4]. As the leading participant of the desmoplastic stroma in PC, CAFs play a crucial part in diverse clinical responses, drug tolerance, and the tumor immunosuppressive environment by producing ECM proteins and cytokines and interacting with cancer cells [5][6][7][8]. The intratumoral heterogeneity of CAFs in the stroma of PC has been extensively studied. Four major distinct subpopulations of CAFs have been demonstrated in PC: (1) myCAF, the myofibroblastic subset (myCAF) characterized by smooth muscle actin expression, high transforming growth factor (TGF) signaling, and ECM components [1,9]; (2) iCAF, the inflammatory subset characterized by high expressions of inflammatory mediators [10]; (3) apcCAF, the antigen-presenting subset characterized by the expression of CD74 and MHC class II [11]; and (4) meCAF, the highly activated metabolic subset characterized by high expression of PLA2G2A and CRABP2 [12]. Some CAF markers have been studied separately in the past few years, such as α-smooth muscle actin (α-SMA), fibroblast activation protein, CD29, fibroblast-specific protein 1, platelet-derived growth factor receptor B, and podoplanin [13]. For instance, tumors accumulated with the α-SMA + fibroblasts have worse prognoses and higher invasiveness, and they can affect therapeutic reactions [13,14]. Fibroblast activation protein-positive CAFs can lead to immunosuppression and resistance to immunotherapy [15]. However, whether the CAF-mediated tumor microenvironment (TME) is associated with tumor characteristics and the underlying molecular mechanism remains unclear [16,17]. In addition, the guiding significance of current pathological and molecular classification for PC treatment is limited [18]. Although existing strategies targeting the stroma have suppressed tumor growth and enhanced treatment responses in the mouse model, clinical trials have not yet produced promising results [6,19]. Some of these strategies even lead to tumor recurrence and metastasis [20], which suggests that accurate identification of the CAF molecular subtypes in PC is necessary to apply stromal-targeting therapies efficiently.
This study determined that PC patients with accumulated α-SMA + CAFs had a poor prognosis regarding tissue microarrays, and that myCAF, apcCAF, and meCAF subsets were highly enriched in PC. Then, we thoroughly estimated the expression profiles of CAF-related genes (CRGs) and their influences on prognosis, clinical features, and immune cell infiltration in PC patients. Furthermore, we constructed a CRG score to predict PC patients' prognoses, clinical outcomes, immunotherapy responses, and chemosensitivity. Our findings could deepen our understanding of CRGs and smooth the way for prognosis evaluation and personalized therapy strategies in PC patients.
α-SMA + CAFs Accumulate in PC Tissues with Worse Prognoses
Previous studies have shown that α-SMA is a marker of activated CAFs and an efficacy evaluation indicator of targeted CAF therapy [21,22]. The Kaplan-Meier curves showed worse overall survival (OS) in patients with high accumulation of α-SMA+ CAFs in PC ( Figure 1F). Immunofluorescence staining, immunohistochemistry, and Masson staining were used to further confirm the CAF population in PC tissue microarrays ( Figure 1A-E). The results show that α-SMA+ CAFs, as a prominent desmoplastic stromal component, were remarkably enriched in PC tissues. To define the substantial proportion of CAFs in human PC tissues and mouse-derived allografts, we excluded hematopoietic and epithelial cells using flow cytometry analysis with CD45 and EpCAM markers. We also identified CAFs using a human fibroblast marker (integrin β1/CD29) and a mouse fibroblast marker (PDPN). The fresh human samples were obtained from PC patients at the time of surgery, before any treatment. Figure 1G-J illustrate that CAFs accounted for about 30% of all cellular populations in human and mouse tumor tissues. We further assessed the enrichment scores of four reported CAF subtypes in pancreatic tumors and normal pancreas samples from the TCGA-PAAD and GTEx cohorts, and we found that myCAF, apcCAF, and meCAF were abundant in the tumor samples ( Figure 2A). These results suggest that CAFs are essential components of the TME in PC, which may modulate tumorigenesis and progression.
Genetic Mutation Landscape of CRGs in PC
We first determined the expression levels of the 25 CRGs in tumor specimens and normal specimens, and we observed that almost all CRGs were abundant in the tumor specimens ( Figure 2B). To reveal the interaction of CRGs, we performed a PPI analysis. Figure 2C displayed that COL1A1, COL11A1, COL3A1, COL5A2, COL1A2, FN1, FAP, CDH1, POSTN, COMP, COL5A1, COL10A1, and THBS2 were hub genes. Furthermore, we identified the total frequency of somatic mutations and copy number variations (CNVs) of the 25 CRGs in PC. As depicted in Figure 2D, 16 of 158 (10.13%) PC samples emerged with genetic mutations. Figure 2D also indicated that, among the 25 CRGs, VCAN, FN1, and COL11A1 were the genes with the highest mutation rate, followed by COL5A1 and CDH1. In addition, we demonstrated evident CNV alterations of the 25 CRGs ( Figure 2E). We also analyzed the CNV alteration location of the 25 CRGs on chromosomes using the "circlize" R package ( Figure 2F). We concluded that CNV might act in regulating the expression of 25 CRGs. These findings reveal significant differences in the genomic background and expression levels of the 25 CRGs between PC and normal samples, implying the latent roles of the 25 CRGs in PC tumor progression.
Identification of CAF Subtypes and Characteristics of the TME in PC
To better understand the expression pattern of CRGs in tumorigenesis, we performed a subsequent analysis of 160 PC patients from TCGA-PAAD. Table S2 lists detailed information about these patients. We further performed a consensus clustering analysis to investigate the relationships between the expression pattern of CRGs and PC subtypes, and we classified PC patients according to the expression levels of these CRGs. Our findings indicate that k = 2 is an optimal choice to divide the entire cohort into CAF cluster A (n = 130) and CAF cluster B (n = 30) ( Figures 3A and S1). Moreover, we used the ICGC cohort to verify the repeatability of the clustering. We also conducted a consensus clustering analysis on this cohort and classified the cohort into two distinct subtypes ( Figure S2A,B). Patients in CAF cluster A had worse OS than patients in CAF cluster B in both the TCGA and ICGC cohorts ( Figures 3B and S2C). We further dissected the CAF signature of the patients in the two CAF subtypes. The expression of the CAF signature in CAF cluster A was substantially higher than in cluster B ( Figure 3C). Figure 3D presents the relevant networks of CRG interactions and regulator connections. It also illustrates the prognostic value of CRGs (Table S3) and the enrichment of the CRG-related KEGG pathways (Table S4) in PC patients. Additionally, significant differences in the genomic expression of CRGs and clinical variables were observed between the two CAF clusters ( Figure 3E).
Apart from the differences in prognosis and genome between CAF cluster A and CAF cluster B, there were also distinct discrepancies in immune cell infiltration and TME score between them. Firstly, we observed higher enrichment scores of myCAF and apcCAF in the CAF cluster A group ( Figure 3F). To investigate the roles of CRGs in the TME of PC, we then evaluated the association among the two CAF clusters, 33 immune cell subtypes, and the TME score (Table S5). Compared with CAF cluster B, CAF cluster A had higher immune and stromal scores ( Figure 3G) and higher infiltration levels of immunosuppressive cells, such as regulatory T cells (Tregs), MDSC cells, and DC cells, and other immunosuppressive factors, such as TGF-β-associated ECM ( Figure 3H). More importantly, we detected a higher enrichment score of anti-PD-1-resistant signatures and a lower enrichment score of nivolumab-responsive signatures in CAF cluster A ( Figure 3H), indicating that patients in the CAF cluster A group may be less sensitive to immunotherapy. These results imply that the CAF cluster A group may be closely associated with stromal activation and immunosuppression features.
Establishment and Verification of the Prognostic CRG Score
The CRG score was created according to the LASSO and multivariate Cox (multiCox) analyses of the 25 CRGs. Eventually, we obtained five hub genes (VCAN, COL1A2, ZNF469, SPARC, and FNDC1). The CRG score was calculated as the sum of each hub gene's expression multiplied by its corresponding coefficient, as described in the Methods. Figure 4A displays the distribution of patients in the two CAF clusters and the two CRG-score groups. Compared with patients who remained alive, the CRG score was significantly elevated in patients who died during follow-up ( Figure 4B), and CAF subtype A had higher CRG scores ( Figure 4C). The risk plot of the CRG score indicated that, with an increasing CRG score, OS time decreased while mortality rose ( Figure 4D,G). Patients with higher CRG scores were associated with worse survival rates ( Figure 4E). Additionally, the AUC values for 1-, 2-, and 3-year OS were 0.63, 0.659, and 0.638, respectively ( Figure 4F). Moreover, the CRG score retained excellent predictability in assessing the prognosis of PC patients ( Figure 4H). Among multiple clinical features, multivariate Cox regression modeling proved that the CRG score was the only independent risk factor for the OS of PC patients in the TCGA cohort ( Figure 4I).
Characteristics of the TME and Function Enrichment in Distinct Subgroups
To examine the association between the CRG score and the TME of PC, we analyzed the immune microenvironment in detail. As confirmed by different methods, the CRG score was positively associated with M1 macrophages and neutrophils, whereas it was negatively related to B cells, NK cells, CD8 T cells, and CD4 T cells ( Figure 5A). Moreover, we sought to explore the potential pathways related to the CRG score using GSVA. Several cancer-associated pathways (P53, Notch, and ERBB pathways) were most closely correlated with the CRG score ( Figure 5B). Consistently, we found that the enrichment levels of B cells, plasma cells, CD8 T cells, and CD4 T cells were markedly higher in the low-CRG-score group than in the high-CRG-score group ( Figure 5C). Figure 5D reveals a higher enrichment score of meCAF in the low-CRG-score group. Furthermore, time-dependent receiver operating characteristic (tROC) analysis showed that the CRG score was the most accurate predictor of overall survival compared with the single-CAF subsets in PC ( Figure 5E). These findings indicate that patients with lower CRG scores had higher meCAF accumulation and more immune cell infiltration.
Association of the CRG Score with Tumor Mutation Burden (TMB) and Mutation
Previous studies have indicated that TMB is a valuable predictor of survival outcomes and immunotherapy response in tumor patients [23]. We explored the differences in the distribution of somatic mutations between the two CRG-score groups in the TCGA cohort ( Figure 6A,B). Patients with high CRG scores had substantially higher frequencies of TP53, KRAS, CDKN2A, SMAD4, and TTN mutations than patients with low CRG scores, implying that these gene mutations may be responsible for the poor prognosis of PC patients with high CRG scores. However, we observed the opposite pattern for the mutation levels of RNF43, MUC16, and RYR1 ( Figure 6A,B). In addition, our analysis of the mutation data demonstrated a higher TMB score in the high-CRG-score group compared with the low-CRG-score group ( Figure 6C).
Clinical Outcomes and Drug Susceptibility Analysis
We investigated the CRG score's ability to predict the impact of initial surgical treatment in PC patients. As displayed in Figure 6D,E, among the patients receiving surgery as their initial therapy, those with lower CRG scores showed significant treatment advantages.
Subsequently, to explore the efficacy of the CRG score as a biomarker for predicting chemotherapeutic susceptibility in PC patients, we assessed the half-maximal inhibitory concentration (IC50) of 138 chemotherapeutic drugs commonly used to treat tumors. We identified 27 drugs to which patients with low CRG scores were more sensitive (Table S6), including EHT.1864 and PD.173074 (p < 0.01; Figure 6F,G). Conversely, patients with high CRG scores responded better to 15 drugs (Table S7), including paclitaxel and lapatinib (p < 0.01; Figure 6H,I). In brief, these findings suggest that the CRG score is associated with drug sensitivity.
Protein Expression Level of CAF-Related Risk Genes and Survival Analysis
To validate the tissue expression of the risk CRGs in normal pancreatic and tumor tissues, we obtained immunohistochemical results from the Human Protein Atlas (HPA). Except for ZNF469, which is not available in the HPA database, the protein expressions of the risk genes, consistent with the mRNA levels in Figure 2B, were elevated in tumor tissues, as shown for VCAN ( Figure 7A).
Discussion
Immune and stromal cells, the essential TME components, are associated with the clinical features and prognosis of PC [24][25][26]. Extensive stromal involvement is a crucial hallmark of PC, which makes it challenging to obtain accurate tumor-specific molecular information [24]. Early studies identified that CAFs, a substantial portion of the tumor microenvironment, drive tumorigenesis and treatment resistance [6,27,28]. Previous studies revealed how CAF patterns affect the characteristics of the TME and the efficacy of immunotherapy in triple-negative breast cancer (TNBC) [29]. With the development of tumor immunology and molecular biology research, immunotherapies, such as immune checkpoint inhibitors, have become new treatments for various tumors [30,31]. Recently, anti-PD-1/PD-L1 therapy has led to outstanding achievements in many malignancies [32,33]. However, due to the dense extracellular matrix acting as a physical barrier, PC patients remain poorly responsive to PD-1 antibodies [32,34]. Moreover, single-cell analysis has revealed that TGF-β-myCAF subtypes are related to resistance to immunotherapy in breast cancer [35]. Whether analyzing CAF molecular subtypes improves the clinical response of PC remains to be determined [25]. Although several studies have identified various biomarkers and clinical factors to predict PC prognosis [24,36], the relationship among CAF-based molecular signatures, clinical outcomes, and tumor microenvironment infiltration remains largely elusive in PC. Here, our study found an abundance of myCAF, apcCAF, and meCAF in the tumor tissues of PC. We also identified alterations in the genomic backgrounds and expression levels of CAF-related genes based on the TCGA, GTEx, and ICGC cohorts. The expression of most CRGs was increased in PC tumor tissues and correlated with prognosis. The aggregation of gene mutations leads to carcinogenesis, and gene mutations in PC may significantly impact immunotherapy response [37]. Among the 25 CRGs, VCAN, FN1, and COL11A1 had the highest mutation frequencies. However, there are currently no reports that these mutations are associated with carcinogenesis or fibrosis.
Additionally, we divided PC patients into two CAF clusters and observed discrepant prognoses, clinical characteristics, and immune infiltration between them. The interaction of CAFs and immunity is a critical feature of tumorigenesis, which can serve as a therapeutic target for PC. Diverse CAF subsets play distinct roles in the tumor immunosuppression of breast cancer, with their effects achieved through Tregs and the regulation of effector T cell proliferation [8]. Our findings showed that CAF cluster A, with a high enrichment of myCAF and apcCAF, had significantly higher stromal and immune scores than CAF cluster B. Cluster A also had higher infiltration levels of immunosuppressive cells, such as Tregs, MDSC cells, and DC cells, and other immunosuppressive factors, such as TGF-β-associated ECM. Previous studies have shown that myCAF is the main component of the ECM [10]. Furthermore, apcCAF potentially modulates the immune response in pancreatic tumors [11]. Our results imply that the CAF cluster A group may be closely associated with stromal activation and immunosuppression features, and that myCAF and apcCAF abundance may be the main factors underlying such an immunosuppressive microenvironment. More interestingly, a higher enrichment score of anti-PD-1-resistant signatures and a lower enrichment score of nivolumab-responsive signatures were also observed in the CAF cluster A group, indicating that patients in this group may be less sensitive to immunotherapy. Furthermore, the CRG score was constructed to quantify CAF subtypes. CAF subtype A, with worse survival, had higher CRG scores. Patients with higher CRG scores also had worse OS, implying that high CRG scores could predict an unfavorable prognosis. By integrating the CRG scores and clinical characteristics, we demonstrated that the CRG score was a unique, independent risk factor for OS. Moreover, we found a higher enrichment of meCAF in PC patients with low CRG scores. Our previous research showed that PDAC patients with abundant meCAF had a dramatically better response to immunotherapy [12]. Consistent with this conclusion, CAF cluster B, with better survival, had a lower CRG score, a lower enrichment score of anti-PD-1-resistant signatures, and a higher enrichment score of nivolumab-responsive signatures. These findings indicate that CRGs may participate in tumor immunosuppression; therefore, patients with low CRG scores may benefit from immunotherapy.
Due to PC patients' distinctive molecular and clinical features, it is necessary to classify them precisely. We further identified potentially sensitive drugs in patients in different CRG-score groups. We expected that targeting CAFs combined with these drugs could reduce drug resistance and improve clinical outcomes.
Human Tissue Specimens
The Human Ethics Committee of Shanghai Renji Hospital, Shanghai Jiao Tong University School of Medicine (Shanghai, China), reviewed and approved research on human pancreatic cancer under informed consent from all patients.
Cell Lines and Mouse Pancreatic Cancer Model
The mouse pancreatic cancer cell line KPC1199 was obtained from the Jing Xue lab (Shanghai, China) and cultured in DMEM with 10% FBS. A total of 1 × 10^6 KPC1199 cells were resuspended in 100 µL of PBS and injected subcutaneously into 6-week-old female C57BL/6 mice from the Shanghai Laboratory Animal Center. The tumor tissues were weighed after 15 days and collected for flow cytometry analysis.
Masson's Trichrome Staining
Formalin-fixed tissues were embedded in paraffin, and 5 µm sections were stained with Masson's trichrome reagent to show collagen. First, the sections were dewaxed and rehydrated, fixed in Bouin's solution, and then washed and rinsed in distilled water overnight. Next, the slides were stained in Mayer's hematoxylin solution for 5 min, and then placed in 0.5% hydrochloric acid and 70% ethanol for 5 s. After the specimens were washed three times and treated with 1% aqueous phosphomolybdic acid, the slides were stained with aniline blue or light green for 5 min. Subsequently, we dehydrated the samples with 10 dips in 95% ethanol and cleared them in xylene. Finally, all slides were scanned and digitized using a digital pathology slide scanner (Leica Biosystems, Wetzlar, Germany). The collagen fibers were stained blue, the nuclei black, and the background red.
Immunohistochemistry and Tissue Microarrays
Tissue microarrays included 91 PC samples. The histopathology of all cancer specimens was reassessed, and representative regions were labeled. Table S1 lists the prognosis information of the individual patients in the tissue microarrays. Immunohistochemical staining was carried out to identify α-SMA and EpCAM marker expression in the tissue microarrays. We applied a semi-quantitative scoring system based on the percentage of positively stained cells and the staining intensity, as previously described [38]. The frequency of positively stained cells was defined as 1+ (less than 25%), 2+ (25% to 50%), 3+ (50% to 75%), or 4+ (greater than 75%). Additionally, the intensity was scored as 0 (negative), 1+ (weak), 2+ (moderate), or 3+ (strong).
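A small sketch may make the scoring arithmetic explicit. The frequency bins and intensity grades follow the text; combining the two subscores by multiplication is an assumption (a common convention in the IHC literature), not something the excerpt states.

```python
def ihc_score(pct_positive: float, intensity: int) -> int:
    """Composite semi-quantitative IHC score.

    Frequency: 1+ (<25%), 2+ (25-50%), 3+ (50-75%), 4+ (>75%).
    Intensity: 0 (negative), 1 (weak), 2 (moderate), 3 (strong).
    Multiplying the two subscores is an assumed combination rule.
    """
    if not 0.0 <= pct_positive <= 100.0 or intensity not in (0, 1, 2, 3):
        raise ValueError("pct_positive in [0, 100]; intensity in {0, 1, 2, 3}")
    if pct_positive < 25:
        frequency = 1
    elif pct_positive < 50:
        frequency = 2
    elif pct_positive < 75:
        frequency = 3
    else:
        frequency = 4
    return frequency * intensity  # 0-12 under this assumption

print(ihc_score(60.0, 2))  # moderate staining in 50-75% of cells -> 6
```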
Flow Cytometry Analysis
Briefly, the fresh tissue samples from two human PC tissues and seven mouse-derived allografts of KPC1199 cells were mechanically chopped and digested with collagenase IV at 37 °C for 30 min. The digested suspension was combined with DNase at room temperature for 5 min, washed twice with phosphate-buffered saline containing 2% serum, and then filtered through a 100 µm filter. We used the markers CD29, PDPN, EpCAM, and CD45 to separate tumor epithelial cells (CD45−CD29− or PDPN−EpCAM+), CAFs (EpCAM−CD45−CD29+ or PDPN+), and tissue leukocytes (CD29− or PDPN−EpCAM−CD45+) in human and mouse tumor specimens, respectively. The digested single cells were washed twice and centrifuged for 5 min at 500× g, and then 1 µg/mL of antibody was added. The samples were then kept at 4 °C for 30 min in the dark. Flow cytometry was performed on a BD Celesta cytometer (Becton Dickinson, New York, NY, USA). Side-scatter width versus side-scatter area and forward-scatter width versus forward-scatter height gates were applied to remove dead cells and cell clumps. Antibodies including anti-EpCAM-PerCP/Cy5.5 (BioLegend, San Diego, CA, USA, #324214), anti-CD45-APC-Cy7 (BioLegend, #368515), and anti-CD29-Alexa Fluor® 488 (BioLegend, #303015) were validated according to the manufacturer's website.
Consensus Clustering Analysis of CRGs
Initially, 25 CRGs were identified from previous studies [24]. Then, we used the "ConsensusClusterPlus" package [39,40] to perform a consensus clustering analysis with the k-means algorithm to identify distinct CAF-associated subtypes. Furthermore, a protein-protein interaction (PPI) analysis was performed using the STRING website (https://cn.string-db.org/, accessed on 1 January 2022) to determine the interplay of CRGs.
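The core of the consensus clustering step can be sketched in a few lines: repeatedly cluster subsamples of the cohort and record how often each pair of samples lands in the same cluster. The Python sketch below is a minimal stand-in for the k-means mode of ConsensusClusterPlus, not a reimplementation of it; the data matrix, subsampling fraction, and iteration count are illustrative.

```python
import numpy as np
from sklearn.cluster import KMeans

def consensus_matrix(X, k, n_iter=100, frac=0.8, seed=0):
    """Consensus clustering by repeated k-means on row subsamples.

    Entry (i, j) of the result is the fraction of subsamples containing
    both samples i and j in which they were assigned to the same cluster.
    """
    rng = np.random.default_rng(seed)
    n = X.shape[0]
    together = np.zeros((n, n))  # co-clustering counts
    sampled = np.zeros((n, n))   # co-sampling counts
    for _ in range(n_iter):
        idx = rng.choice(n, size=int(frac * n), replace=False)
        labels = KMeans(n_clusters=k, n_init=10,
                        random_state=int(rng.integers(1 << 31))).fit_predict(X[idx])
        same = labels[:, None] == labels[None, :]
        sampled[np.ix_(idx, idx)] += 1
        together[np.ix_(idx, idx)] += same
    return np.divide(together, sampled,
                     out=np.zeros_like(together), where=sampled > 0)

# Illustrative shape only: 160 samples x 25 CRG expression values.
X = np.random.default_rng(1).normal(size=(160, 25))
M = consensus_matrix(X, k=2)
print(M.shape)  # (160, 160); stable clusterings push entries toward 0 or 1
```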
Correlations of Molecular Patterns with the Clinical Features and Prognosis of PC
We explored the correlation of molecular subtypes, clinical variables, and survival outcomes to assess the clinical value of the two CAF subtypes. The clinical features included age (≥65 and <65 years), gender (male and female), tumor location (left and right side), TNM stage (stage I-IV), KRAS mutation status (abnormal and normal), and TP53 mutation status (abnormal and normal). In addition, the differences in OS between the two subtypes were estimated by Kaplan-Meier analysis using the "survival" and "survminer" packages [41].
Association of Molecular Subtypes with Tumor Immune Microenvironment of PC
We assessed PC patients' immune and stromal scores using the ESTIMATE algorithm [42]. Then, the infiltrating fractions of 33 immune cell subtypes and four CAF subsets of each patient were computed with a single-sample gene set enrichment (ssGSEA) analysis algorithm [43].
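As a rough illustration of the per-sample enrichment idea behind ssGSEA, the sketch below computes a simplified rank-based score: the mean normalized rank of a gene set within each sample. This is a stand-in for intuition, not the ssGSEA statistic itself, and the gene names and expression matrix are hypothetical.

```python
import numpy as np
from scipy.stats import rankdata

def mean_rank_enrichment(expr, genes, gene_set):
    """Per-sample enrichment proxy: mean normalized rank of gene_set genes.

    expr: (n_genes, n_samples) matrix; genes: gene names for the rows.
    Values near 1 mean the set is highly expressed in that sample.
    """
    in_set = np.isin(genes, list(gene_set))
    # Rank genes within each sample; highest expression -> normalized rank ~1.
    ranks = np.apply_along_axis(rankdata, 0, expr) / expr.shape[0]
    return ranks[in_set].mean(axis=0)

genes = ["ACTA2", "TAGLN", "CD74", "PLA2G2A", "GAPDH", "ALB"]  # hypothetical
expr = np.random.default_rng(0).lognormal(size=(6, 4))         # 6 genes x 4 samples
print(mean_rank_enrichment(expr, genes, {"ACTA2", "TAGLN"}))   # myCAF-like set
```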
Relationship of CAF Subtypes with Immunotherapy Responses in PC
As CAFs play a vital role in regulating tumor immune evasion [44], to further determine the association of CAF subtypes with immunotherapy responses of PC, we performed ssGSEA analysis to dissect the gene expression profiles of immunotherapy responses in the PC patients according to the nivolumab-responsive and anti-PD-1-resistant signatures [45].
Development of the CAF-Related Gene Risk Signature
We performed univariate and multivariate Cox regression analyses in the TCGA-PAAD cohort to construct a novel CRG-based signature for predicting prognosis. Initially, univariate Cox regression analysis was employed to assess the prognostic values of the 25 CRGs in PC patients. With p < 0.05 as the screening threshold, 22 prognostic CRGs related to the survival of patients with pancreatic cancer were screened out. We then applied the penalized Cox regression model with the least absolute shrinkage and selection operator (LASSO) and multivariate Cox regression analysis to the 22 prognostic CRGs. We obtained five hub prognostic CRGs (VCAN, COL1A2, ZNF469, SPARC, and FNDC1) and their corresponding coefficients. Finally, a scoring algorithm named the CRG score was established to quantify the CAF state at the transcriptomic level. The CRG score was calculated as follows: CRG score = Σ (expression × corresponding coefficient).
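A minimal sketch of the scoring step follows, assuming the five published hub genes but with placeholder coefficients (the fitted multivariate Cox values are not reported in the excerpt); the median split into high- and low-score groups is likewise an assumed convention.

```python
import numpy as np

HUB_GENES = ["VCAN", "COL1A2", "ZNF469", "SPARC", "FNDC1"]
COEFS = {"VCAN": 0.21, "COL1A2": 0.15, "ZNF469": 0.30,
         "SPARC": 0.12, "FNDC1": 0.18}  # placeholder values, not the fitted ones

def crg_score(expression: dict) -> float:
    """CRG score = sum over hub genes of expression * coefficient."""
    return sum(expression[g] * COEFS[g] for g in HUB_GENES)

def split_by_median(scores):
    """Assign 'high'/'low' groups by the median score (assumed cutoff)."""
    med = np.median(scores)
    return ["high" if s > med else "low" for s in scores]

def fake_patient(seed):
    rng = np.random.default_rng(seed)
    return {g: float(abs(rng.normal(5.0, 2.0))) for g in HUB_GENES}

patients = [fake_patient(i) for i in range(6)]
scores = [crg_score(p) for p in patients]
print([round(s, 2) for s in scores], split_by_median(scores))
```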
Additionally, we executed tROC curve analysis and independent prognostic analysis to validate the predictive capability of this novel CRG-based signature and other clinical variables at 1-, 2-, and 3-year OS using the R package "survivalROC".
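The paper runs this step in R with "survivalROC"; a rough Python analogue using scikit-survival's cumulative/dynamic AUC, on synthetic data, might look as follows.

```python
import numpy as np
from sksurv.util import Surv
from sksurv.metrics import cumulative_dynamic_auc

rng = np.random.default_rng(0)
n = 160
risk = rng.normal(size=n)                          # stand-in for the CRG score
time = rng.exponential(scale=36.0 / np.exp(risk))  # months; higher risk -> earlier event
event = rng.random(n) < 0.7                        # ~70% observed deaths

y = Surv.from_arrays(event=event, time=time)
times = np.array([12.0, 24.0, 36.0])               # 1-, 2-, and 3-year horizons
auc, mean_auc = cumulative_dynamic_auc(y, y, risk, times)
print(dict(zip(times, np.round(auc, 3))), round(float(mean_auc), 3))
```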
Expression Validation of CAF-Related Risk Genes
Immunohistochemical results for the CRGs involved in the risk signature were obtained from the HPA database (https://www.proteinatlas.org/, accessed on 1 January 2022) to validate CRG expression in normal and tumor tissue.
Identification of Immune Microenvironment Affected by CRGs
We investigated the association between the CRG score and the immune microenvironment with several common methods, including XCELL, TIMER, QUANTISEQ, MCPCOUNTER, EPIC, CIBERSORT-ABS, and CIBERSORT.
Correlation of CRG Score Signature with Signal Pathways, Tumor Mutation, and Chemosensitivity
To identify the differences in somatic mutations of PC patients between the high- and low-CRG-score groups, the mutation annotation format was processed with the "maftools" R package [46]. We further examined the interdependence of the CRG score, clinical outcome, and TMB. Moreover, to explore differences in chemotherapy drug efficacy between the two subgroups, we estimated the half-maximal inhibitory concentration (IC50) values of chemotherapy drugs for each patient using the "pRRophetic" package [47], which is based on drug sensitivity data from the Genomics of Drug Sensitivity in Cancer dataset (https://www.cancerrxgene.org/, accessed on 1 January 2022). A ridge regression model was fitted to the standardized expression data, with predictor genes as covariates and the drug sensitivity (IC50) values as the outcome variable [48].
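The pRRophetic step amounts to fitting a penalized linear model of drug response on expression in cell lines, then applying it to tumor samples; a minimal scikit-learn analogue on synthetic data is sketched below.

```python
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
# Training data: cell lines with expression profiles and measured log(IC50).
X_train = rng.normal(size=(200, 50))   # 200 cell lines x 50 genes
w_true = rng.normal(size=50)
y_train = X_train @ w_true + rng.normal(scale=0.5, size=200)

scaler = StandardScaler().fit(X_train)
model = Ridge(alpha=10.0).fit(scaler.transform(X_train), y_train)

# Predict drug sensitivity for new tumor samples from expression alone.
X_tumor = rng.normal(size=(5, 50))
print(np.round(model.predict(scaler.transform(X_tumor)), 2))
```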
Additional Bioinformatics and Statistical Analyses
We applied the Wilcoxon test to analyze inter-group differences, conducted Spearman analysis for correlation tests, and performed log-rank tests with Kaplan-Meier curves for survival analysis. R 3.6.3 (R Foundation for Statistical Computing, Vienna, Austria) and its corresponding packages were used to process, analyze, and present the data. p < 0.05 was considered to indicate statistical significance.
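A minimal Python analogue of this testing workflow (synthetic data; the analyses in the text were run in R) might look as follows.

```python
import numpy as np
from scipy.stats import mannwhitneyu, spearmanr
from lifelines.statistics import logrank_test

rng = np.random.default_rng(0)
low, high = rng.normal(0.0, 1.0, 80), rng.normal(0.5, 1.0, 80)  # e.g. TMB by group

# Unpaired two-group comparison (Wilcoxon rank-sum / Mann-Whitney U test).
print(mannwhitneyu(low, high).pvalue)

# Spearman correlation, e.g. CRG score vs. an immune-cell fraction.
print(spearmanr(rng.normal(size=80), rng.normal(size=80)).pvalue)

# Log-rank test between two survival groups.
t_low, t_high = rng.exponential(30.0, 80), rng.exponential(20.0, 80)
e_low, e_high = rng.random(80) < 0.7, rng.random(80) < 0.7
res = logrank_test(t_low, t_high, event_observed_A=e_low, event_observed_B=e_high)
print(res.p_value)
```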
Conclusions
In this study, we systematically analyzed the genomic backgrounds and expression levels of CRGs and inferred their latent role in PC patients' prognosis and tumor microenvironment. We also constructed a novel CAF-associated gene signature as a robust biomarker to predict the prognosis, chemotherapeutic drug sensitivity, and immunotherapy impacts in PC. These results reveal the vital clinical significance of CRGs and put forward new ideas about the molecular classification of PC, which may be applied to precision medicine.
"Biology"
] |
THE RELATIVE INVESTMENT PERFORMANCE OF THE COMMUNITY GROWTH FUND
This paper examines the impact of social criteria on the investment performance of the Community Growth Fund, a trade-union-controlled South African unit trust. It gives a brief history of the fund, discusses reasons for performance deviations, and shows that there may be reason for believing that some social criteria improve performance.
1.1 The explicit use of ethical or social criteria in the choice of institutional investments grew considerably in the last two decades of the twentieth century. A major question raised by the growth in the number and size of institutions adopting such criteria has been the effect on investment performance.
1.2 In South Africa, the Community Growth Fund (CGF) was the first institution explicitly to adopt social criteria. The balance of this paper is divided into four sections. Section 2 gives an outline of the history and structure of the CGF. The third section looks at the reasons why share prices move, both absolutely and relative to each other. The fourth then briefly discusses socially responsible investment. The final part looks at the relationship between the CGF social criteria and investment performance.
1.3 The conclusion is that, if anything, and contrary to some prior opinions, the CGF social criteria have had a positive impact on investment performance.
HISTORY AND STRUCTURE
2.1 UNIONS AND PROVIDENT FUNDS
2.1.1 The creation of the CGF was rooted in two uniquely South African conditions.
2.2.2 The CGF is operated by a management company, which was initially jointly controlled by SMA and Unity Incorporated, a non-profit company set up by the seven sponsoring unions. The management company subcontracted its functions to SMA and its affiliates. The author became the independent acting chairman of the board of the management company and served as chairman until 2002.
2.2.3 The seven trade unions came from both of the large trade union groups. Affiliated to the Congress of South African Trade Unions (COSATU) were the National Union of Mineworkers (by far the largest investor in the fund), the Construction and Allied Workers' Union, the Transport and General Workers' Union, and the Paper, Printing, Wood and Allied Workers' Union. The others were from the National Council of Trade Unions (NACTU): the National Union of Food, Wine and Spirits and Allied Workers, the Transport and Allied Workers' Union, and the Metal and Electrical Workers' Union of South Africa.
2.2.4 The memorandum of association of Unity set out Unity's main business as: "To advance the concept of socially responsible investment and to promote the influence of the Trade Union Movement in the economy through: establishing and holding shares in Unit Trust Management Companies; - establishing financial vehicles, whether on its own or jointly with other organisations, for personal, Trade Union and Retirement Fund investments …"
2.3 INVESTMENT PROCESS
2.3.1 There were initially 16 social criteria, weighted as shown in Table 1.
2.3.2 SMA nominated those companies that it wished to include in the investment universe. The LRS and Unity then evaluated these companies. The process involved an initial questionnaire to be completed by the company, and then interviews with management and union organisers. It proved difficult at times to get co-operation from companies and from some unions not affiliated to the CGF. The LRS then prepared a report that was submitted to the Unity board of directors for approval.
2.3.3 Once a company was approved, SMA was able to buy shares for the CGF. The Unity board was keen to publish the results as quickly as possible, but SMA persuaded them to keep the names of approved companies confidential until they appeared as investments in the published reports of the CGF. SMA had been chosen for its exceptional investment performance in the years preceding the launch, and its management was concerned that publishing the names of the companies in which it intended to invest would push up share prices before SMA had been able to acquire an adequate holding. The relatively low liquidity of the South African share market gives reason for these concerns.
2.3.4 At the meeting of the board of the management company in July 1992, one month after the launch of the CGF, it was reported that 9 companies, out of SMA's list of 51, had been approved, 7 had been rejected, and 11 had been referred back to the LRS for further investigation. The slow rate of approval, arising partly from the limited resources available and partly from the need to obtain consensus from the volunteer directors of Unity, has been a constant, but inevitable, source of frustration to asset managers.
2.4 INITIAL RESPONSES
2.4.1 The first investor in the CGF was Cyril Ramaphosa, then secretary general of the African National Congress, who made out his personal cheque at the initial press conference. This was followed the next day by a R1 million investment from the pension fund of the business- (rather than labour-) orientated Times Media Limited. Its managing director, Stephen Mulholland, identified its launch as a "significant departure in the affairs of a trade union". An LRS spokesman was "delighted that Times Media has acknowledged the superior potential of socially responsible investment", but hastily pointed out the possibility that the company's industrial relations practices would disqualify it from appearing on the unions' list of approved shares2.
2.4.2 Greenblo3 identified the tensions. Existing participation in provident funds meant that the unions had implicitly accepted the principle of investment in shares and participation in the capitalist system, even though this flew in the face of a COSATU economic policy document released earlier in 1992. The Unity unions' involvement in the CGF made this more explicit. Greenblo expressed the hope that the CGF approach would break down the "adversarial divide between employer and employee".
2.4.3 Other criticisms arose from a fear that the social criteria would reduce investment performance, for the reasons set out in section 4 below.
2.5 SUBSEQUENT DEVELOPMENTS
2.5.1 As it transpired, the CGF was the forerunner of much greater union involvement in the capitalist system. While a number of COSATU and NACTU trade unions refused to endorse the fund4, unions that initially resisted the CGF's apparent compromises subsequently created union investment companies that actively pursued commercial objectives.
2.5.2 Inflows of R100 million were predicted for the CGF in the first year. Given that union-controlled provident funds had some influence over perhaps R20 billion of assets, this did not appear particularly ambitious, but by the end of 1993 the fund had only R70 million. Steady growth, usually below expectations, has been achieved, the fund reaching R900 million by 1998 and currently approaching twice that size.
2.5.3 A number of reasons for the disappointing growth can be identified. The initial opposition of some unions, inadequate management resources, and numerous changes of personnel were perhaps the main ones. More important to the subject of this paper was the question whether socially responsible investment would reduce investment returns.
NEW INVESTMENT CRITERIA
In the 1996 report to unitholders, "tougher new criteria for companies seeking inclusion in the share portfolio" were introduced. Eight categories, each given an equal weight in the final score, were to be used; a worked scoring sketch follows the list below.
- The creation of jobs through innovation and expansion: The questionnaire used allocates a maximum of 20 points for this category; 5 are given for growth in jobs, another 9 for expansion plans, 2 for retrenchment procedures, and 4 for not creating jobs overseas.
- Training of workers to enhance skills: 2 out of the 9 points allocated to this score relate to training expenditure as a percentage of payroll; the rest are qualitative measures, depending to a significant degree on collaboration with the trade union.
- Economic and social empowerment: 20 out of 30 points here relate to bargaining with unions, 5 to donations to relevant social projects, and 5 to partnerships with emerging black businesses.
- Equity through affirmative action within the workplace: Half the points here are allocated on whether the company scores better than the LRS average for its industry. The rest of the points are qualitative assessments.
- Good conditions of employment: 6 of the 15 points for this category relate to wages, the rest to benefits.
- Sound environmental practices must be promoted: This is more complicated. Companies not involved in an industry that could have a potential environmental incident automatically get 9 out of 13 points. Companies in other industries can score these 9 if they have suitable policies, losing 1 if they experienced accidents and another if the accident was not catered for in their policies. Plus or minus 3 points are allocated for the desirability or otherwise of the company's products, and 1 for not having dealings with oppressive regimes.
- High health and safety standards must be applied: Measures of injuries account for 3 out of 11 points; the rest relate to policies.
- Demonstrate open and effective corporate governance: The points here are largely allocated in line with the King Report5. 3 points are given for the company's response to CGF queries and another 4 for disclosure to unions.
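To make the arithmetic concrete, the sketch below normalizes each category's raw points by its maximum and averages the eight results, which is one natural reading of "each given an equal weight in the final score". The maxima for the affirmative-action and governance categories are not stated in the text and are assumed here purely for illustration.

```python
# Maximum points per category as given in the text; the affirmative-action
# and governance maxima are NOT stated and are assumed for illustration.
MAX_POINTS = {
    "job_creation": 20,
    "training": 9,
    "empowerment": 30,
    "affirmative_action": 10,   # assumed
    "employment_conditions": 15,
    "environment": 13,
    "health_safety": 11,
    "governance": 12,           # assumed
}

def cgf_score(raw: dict) -> float:
    """Equally weighted final score in [0, 1]: each category's raw points
    are divided by that category's maximum, then the eight are averaged.
    The normalization rule itself is an assumption consistent with
    'each given an equal weight in the final score'."""
    if set(raw) != set(MAX_POINTS):
        raise ValueError("scores required for all eight categories")
    return sum(raw[c] / MAX_POINTS[c] for c in MAX_POINTS) / len(MAX_POINTS)

example = {"job_creation": 14, "training": 6, "empowerment": 22,
           "affirmative_action": 7, "employment_conditions": 10,
           "environment": 11, "health_safety": 8, "governance": 9}
print(round(cgf_score(example), 3))
```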
3.1 OVERALL LEVEL OF PRICES
3.1.1 The next three sections describe the three underlying forces at work in the determination of share prices: the demand for shares and the supply of shares, both of which arise independently of their actual value, and then the factors that drive profitability.
3.2 DEMAND
3.2.1 Demand for shares will depend firstly on the forces that lead to people's saving. This is one of the major concerns of economics, but 75 years of intense debate and research have produced very little in the way of mathematical relationships. Smith (1990) reviews research that shows people tend to save more when:
- they cannot borrow;
- their incomes are rising rapidly;
- their income (current, and future in the form of a public pension) is insecure;
- they have less in the way of accumulated wealth;
- they need the money for their own businesses; and
- there are some tax advantages in doing so.
They may also save in order to leave money to their children, though this is unclear.
3.2.2 People may also shift their savings between cash deposits, real assets (largely property), long-term fixed-interest stock and equities. These shifts will depend partly on inflationary expectations and legislation.
3.2.3 People also save, through retirement funds of various types or directly, for retirement. This aspect, at least, we can model, as we can project the amounts people will require for their retirement with some accuracy. Some of the impetus for the higher share prices of the last decade appears to come from this source (Poterba, 2004).
3.3 SUPPLY
3.3.1 The supply of equities will come from new issues of equity, which in turn will depend on matters such as technology, economic growth, labour scarcity, tax and politics (especially privatisation or nationalisation) and the internal politics of family businesses. Changes to these factors are largely unpredictable. Sales may also occur for reasons such as death duties and retirement expenses, which can be modelled.
3.3.2 Even if we could model the factors that underlie the demand and supply curves, they appear to be greatly influenced by changing perceptions, and it is unlikely that we should find any stable relationship to convert them into prices.
3.4 UNDERLYING PROFITABILITY
3.4.1 The other major influence on share prices will be the underlying profitability of the companies that have issued the shares. This in turn will also depend, but in different ways, on technology, economic growth, international trade, legislation, taxation, the relative scarcity of capital and the quality of management. All these relationships are unstable.
3.4.2 This is not, however, to say that there is no way of determining a reasonable long-term value for a share. The net asset value (NAV) is not commonly used to price shares, but is useful in providing two (albeit approximate) views of a company's worth. Firstly, if a competitor were to start up from scratch, it would seem that it would have to invest in similar equipment and make similar profits and losses to get to where the company now is. If so, it would have the same net asset value. Of course, inflation distorts this figure, and there are intangible items that enhance a company's future earning capacity: particularly intellectual property such as patents and reputation. On the other hand, newcomers to an industry can avoid the mistakes of the incumbent and focus on its more profitable markets. The point is that more newcomers will be encouraged into an industry as the ratio of price to NAV increases. This ultimately brings share prices down towards NAV.
3.4.3 Secondly, the NAV also provides an estimate of the break-up value of the company, or of its value if it were run down. Again, many adjustments would need to be made. This second argument is of importance when share prices are too low and asset strippers can buy up the shares and make a profit by selling off the assets.
3.4.4 In the economics literature, Tobin's Q (the ratio of share price to NAV) has been used as a measure of the size of a share bubble6, as a measure of the company's success in achieving a competitive advantage, or as a measure of lack of competition7.
3.4.5 An alternative measure of the underlying value of a share is to discount future dividends at a suitable interest rate. The total return to an investor is given by the dividends and the capital growth. If dividend yields remain constant or fluctuate in a narrow band, the total return is entirely dependent on the dividends, as the capital growth is equal to the rate of dividend growth. Actuarial models built by Wilkie (1995) for other countries, and by Thomson (1996) for South Africa, use mean-reverting dividend yields to model investment returns.
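A minimal numerical sketch of this dividend-discount logic, under the standard constant-growth (Gordon) assumptions that the paragraph implies but does not spell out:

```python
def gordon_value(next_dividend: float, discount_rate: float,
                 growth_rate: float) -> float:
    """Present value of a dividend stream growing at constant rate g,
    discounted at r: P = D1 / (r - g). Requires r > g."""
    if discount_rate <= growth_rate:
        raise ValueError("discount rate must exceed growth rate")
    return next_dividend / (discount_rate - growth_rate)

# With a constant dividend yield, total return = dividend yield + growth:
# D1 = 3, r = 12%, g = 8% gives P = 75, a 4% yield, and 4% + 8% = 12%.
price = gordon_value(3.0, 0.12, 0.08)
print(price, round(3.0 / price + 0.08, 4))  # 75.0 and 0.12
```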
3.4.6 The discounting of dividends becomes a less reliable estimator of a share's value if companies do not pay out a consistent proportion of profits as dividends. It appears that one of the Miller-Modigliani (1961) propositions, which suggests that investors would be happy to see management retain the dividends if the resultant increase in share price is greater than the value of the dividends, is relatively widely accepted, and has led around the world to a fall in dividend yields. This has been exacerbated in South Africa since the secondary tax on companies has penalised companies for paying dividends.
3.4.7 The difficulties with these methods lead to the most popular measure of a share's value, the ratio of market price to reported earnings per share. It is normally used as a simple rule of thumb to determine value relative to the whole market. A more rigorous approach would require the earnings to be discounted, as with dividends. Allowance again has to be made for inflation, and for the need to retain profits to fund growth.
3.5 FUNDAMENTALS AND PERCEPTIONS
3.5.1 The movement of the share price of an individual company, relative to the average, depends primarily on the underlying profitability of the company relative to others, rather than on overall supply and demand factors. In the short run, perceptions of its relative future are important, as are temporary pressures of demand and supply. For example, it is common to speak of an "overhang" of shares, such as followed the demutualisation of the largest South African life assurers, which depressed their prices for some time. The relative underlying profitability depends on a number of factors.
3.5.2 SECTOR
3.5.2.1 Profitability depends in many ways on the industry in which a company competes. It is probable that the demand for the products produced by companies operating in the same industry will be correlated, whether changes arise from demographics, technology or politics. In the same way, the supply of goods offered by competitors affects all companies within a sector. Similarly, companies within a sector are likely to find that their costs of production are related, whether because of changes in technology or the costs of inputs.
3.5.2.2 Sectors can be measured, as most stock exchanges allocate companies to appropriate sectors, but classifications change over time. Data on the rest of the factors mentioned below are only obtainable from the company with some difficulty.
3.5.3 FINANCIAL STRUCTURE
While companies in the same sector may tend to have a similar financial structure, factors other than industry may apply. Growing or family-controlled businesses, for instance, may use greater debt. Different managements may also take a different view on future interest rates, or hedge differently against price movements in their revenues or expenses. The most important implications of different financial structures are different exposures to interest-rate, currency and commodity fluctuations.
3.5.4 MARKETS
Even within the same industry, companies develop different niche markets, where one may have a more appropriate product or a stronger distribution, or be better known. Thus, companies may be affected differently by demographic and economic changes and, especially if they operate in different currency areas, by changes in exchange rates.
3.5.5 PRODUCTION
Companies within the same industry may also use different technologies. This may lead, for instance, to greater exposure to changes in wage levels or to the development of different patents.
3.5.6 STAFF AND SYSTEMS
In spite of the importance of the financial and material assets of a company, or its market strength, it is widely recognised that profits may well depend more on the skills, abilities and motivation of the management and staff, and on formal and informal institutional arrangements within the company.
3.5.7 PERCEPTIONS
3.5.7.1 It is clear from the above that each of these headings hides a multitude of other factors, and that it is impossible to accurately model the future relative profitability of any company. The share price therefore also depends on the perceptions of those stock-market participants who might consider buying or selling it.
3.5.7.2 The perceptions of an individual are influenced by the perceptions of others. One readily observable result is the herding of investors into observable fashions. Two often mentioned are the alternating attractions of large and small companies, and of growth and value shares.
3.5.7.3 Differences in perception are perhaps indistinguishable from differences in the skill of investors or their managers in understanding likely changes in share prices. The returns earned by different investors will differ randomly; it may be that more skilled investors can expect to earn a higher return.
3.6 EFFICIENT MARKET HYPOTHESIS
3.6.1 Readers may be struck by the difference between this discussion and the common assumption that share markets are efficient: market participants price all assets (and liabilities) so that the expected risk-adjusted rates of return are equal. There are some differences as to how the adjustment for risk is made. Many, such as Malkiel (2005), argue (in his case from the fact that professional investment managers do not outperform the market averages) that there is good evidence of market efficiency in the USA over the past twenty years or more.
3.6.2 While it may be enormously difficult to outperform the market after fees, few would today argue that investment markets are always and everywhere efficient: recent research has found numerous pricing anomalies. These inexplicable returns arise from characteristics as varied as market-to-book ratios and calendar months. Section 4 of Cantor and Sefton (2002) provides an overview of changes in the economic literature over the past two decades. In particular, one might expect markets with fewer participants, such as the JSE Securities Exchange, to be less efficient.
3.6.3 If markets are always efficient, participants will quickly adjust prices to reflect expected risk-adjusted returns, and investors using social criteria will obtain the same return as everyone else. The question of relative investment performance only has meaning if markets are not perfectly efficient in adjusting for social criteria. Unsurprisingly, therefore, Sauer (1997) finds, on the grounds that two socially responsible indices did not underperform indices representing the whole market over a nine-year period, that socially responsible investors will not have to "sacrifice investment performance". This may not, however, always be true.
SOCIALLY RESPONSIBLE INVESTMENT AND SHARE PRICE
4.1 SCREENING
4.1.1 In the light of the above, this section looks at the effects that social criteria might have on share prices.
4.1.2 Alperson (1991) traces socially responsible investment back to the 1920s, with funds with a church connection prohibiting investments in "sin stocks", mainly related to companies producing liquor or tobacco or involved in gambling. This screening approach has been extended to armament manufacturers, serious polluters, companies found offensive for a range of causes, and a Sharia (Muslim) avoidance of companies charging interest. This is the basis for the JSE Securities Exchange's social responsibility index introduced in 20048.
4.1.3 Another approach to screening is based on companies' corporate governance structure. Although the underlying reason for such screening may be economic, the connection does not detract from the ethical principles underlying good governance. A positive relationship has been found in a number of studies. One of the most extensive can be found in Gill (2001), who examines and ranks 495 companies in 25 emerging markets and shows strong correlations between corporate governance and share-price performance. He finds that performance, in terms of both share price and underlying profitability measures, is related to his index of corporate governance and social criteria.
4.1.4 The screening approach affects the demand for a particular company's shares, and may alter the return. Temporary and permanent effects can be noted:
- If the number of investors applying the screening is increasing, demand for an offensive share will decline, as may its price, and vice versa. This is clearly a temporary phenomenon.
- If large numbers of investors permanently avoid a particular company, then its price may well be permanently depressed relative to its earnings. If the company survives, it will necessarily provide higher returns. Investors who ignore the moral stigma attached to such companies may be said to be beneficiaries of moral arbitrage. The reverse effect might conceivably apply to a company regarded as a moral exemplar.
4.1.5 The CGF criteria do act as a screening device for shares. At first blush, therefore, it would appear that the only possible long-term impact of the screening per se would be to reduce the performance of its investments. For the screening to increase performance, it would have to include factors that correlate positively with improved performance, and the rest of the market would have to remain in ignorance of these factors.
4.2 ACTIVE SHAREHOLDING
4.2.1 An alternative approach, referred to by Leeman (unpublished) as the "overlay approach", is for shareholders to actively engage the companies in which they invest in order to influence management practices. The most prominent exponent of this approach is the Californian public pension system (Calpers), which has been prominent as a vocal shareholder. Although its portfolios are indexed, it takes an active interest in companies it feels are poorly managed, and Calpers (1995) reports significant improvements in management, and dramatic increases in share prices, as a result.
4.2.2 The CGF also engages the management of each company in which it invests. This arises initially with the collection of data from management and union officials, which is often followed up by negotiations between Unity and the company. Although it has not been consistent, Unity also has a programme of encouraging union members to attend annual general meetings with CGF proxies, and of asking questions. The interaction with the company is, unlike the Calpers interaction, intended to improve the company's social performance rather than its profits. It may, however, have other side effects.
4.2.3 These interactions may achieve improved performance of management and staff, as mentioned in ¶3.5.6 above. The Calpers approach may also provide motivation to management to give better returns to shareholders. They may therefore create better returns by avoiding the mistakes and losses that can be caused by bad management.
4.2.4 The CGF approach is more indirect, and the results may well not always lead to better returns. Managers may, for instance, be reluctant to create jobs for a variety of reasons not related to profitability. One reason might be that the managers fail to distinguish between maximising the rate of return on capital and maximising shareholder returns. Chew (1998: 165-88) reports a panel discussion in which ten senior business people and academics make these assertions. Pressure to create jobs may then lead to profitable new ventures or reductions in their required rates of return. Whether the latter will lead to an increase or reduction in profit depends, however, on the company's cost of capital.
4.2.5 The impacts of all the CGF's social criteria are ambivalent in this way. They may well promote good management practices that will lead to higher profitability and reduce the risk that the company will be subject to labour unrest and liability claims. On the other hand, they could be taken beyond the optimum level and lead to expenditure that does not provide a return to shareholders.
4.2.6 It is highly unlikely that the same effects will be observable over many years. Even if it were possible for good managers to achieve a balance between the social criteria and returns, the proportion of good and bad management in the market is bound to change, so changing relative performance.
4.3 PROFITS AND RENT
4.3.1 One can distinguish between legitimate profits earned by shareholders for providing capital at some risk to themselves, and monopoly rents extracted from other stakeholders because of a company's stronger market position. In Asher (1998), it is suggested that the latter is morally reprehensible.
4.3.2 The impact of rents on share-price movements would be similar to the points raised in section 4.1.4 above: an increase in rents would see a temporary increase in share prices, and vice versa; and to the extent that the market permanently saw rents as reprehensible, or risky, moral arbitrage could lead to higher profits from monopolist firms.
4.4 FIDUCIARY DUTIES
4.4.1 Given that the CGF is intended as a vehicle for retirement fund investment, the management would have to conform to the fiduciary duties expected of fund trustees. A conservative view was set out in the Megarry judgement.⁹ The British Coal Board as employer had prevented the trustees of their pension fund from using non-investment criteria in asset selection. In this case, the criteria were relatively eccentric. Apart from not wanting to invest in any company investing outside the UK, the trustees also wanted to avoid energy sources competing with coal. Yaron (unpublished) also points out that the union-appointed chairman of the trustees personally argued the case, so depriving the judge of more nuanced legal debate. The judgement stated that trustees are obliged: "to do the best they can for the beneficiaries … and … take advantage of the full range of investments authorized, rather than narrowing that range, and … [are] required to consider the need for diversification of the trust investments.
"… Accordingly, trustees of a pension fund could not refuse for social or political reasons to make a particular investment if to make that investment would be more beneficial to the beneficiaries of the fund."4.4.2Yaron (unpublished) summarises the objections to this narrow view, and suggests various defences of socially desirable investment, even if it does reduce investment returns: -The retirement fund rules may permit the use of social criteria.
- To the extent that the additional profits arise from moral arbitrage, it would seem to be within the rights of the trustees to refuse such profits if their view is shared by the majority of beneficiaries.
- A more common view of trust law is that trustees should aim for a reasonable rate of return; it is not necessary to aim at the maximisation of risk-adjusted returns.
4.4.3 To the extent that social criteria do not reduce returns, and within these constraints if they do, there is plenty of room to apply social criteria.
4.5 TRUTH, GOOD GOVERNANCE AND CREATIVITY
4.5.1 Most of the reasoning above led to the expectation that socially desirable investment is likely to underperform, if anything, over the long term. If permanent sources of outperformance are to be found, one would have to look for entrenched behaviour by the managers of those companies that fail the social criteria. One such possibility is that the single-minded pursuit of shareholder profit so alienates their other stakeholders as to reduce their ultimate profitability.
4.5.2 The issue can be seen as one of justice, the virtue that governs the use of power: in this case that of shareholders over other stakeholders. As Lucas (1980) puts it: "Justice is the bond of society … We have been pursuing the wrong political goals - productivity, efficiency, equality - and have neglected the cardinal political virtue of justice, which, together with liberty, is the condition under which I and every man can identify with society, feel at one with it, and accept its rulings as my own."
4.5.3 It might be put that the human rewards of working for a company that adequately cared for all its stakeholders would unleash creativity that overflowed into greater profits for shareholders. Cynically, this could be phrased as the fulfilling of social criteria enabling shareholders to better motivate staff.
4.5.4 In either case, the boot is on the other foot. It is not those trustees who adopt social criteria that fail in their fiduciary duty, but those who place their funds' returns at risk.
5 MEASUREMENT
5.1 EARLIER STUDIES
5.1.1 From the above, it is clear that there are many influences on share prices, and that the effects of social criteria are at best going to be small, and may well be transitory. The measurement of relative performance may, however, offer some insight into investment markets, company profitability and the importance of corporate governance. From a fiduciary perspective, it is important to have some confidence that performance was not significantly affected adversely; from a marketing perspective, it is useful to know if performance was positively affected. Two informal studies examined relative performance in the mid-nineties.
5.1.2 Abdulla (1993) studied the relative performance of the 21 shares approved and the 10 shares rejected by Unity, against various benchmarks over the five-year period ended 31 December 1992. He found that the approved shares outperformed the rejected shares by an average of 8% annually over the period. First Bowring Consulting and Actuarial Services¹⁰ reported that the standard deviation of the average returns of 17 asset managers over the five years to December 1991 was 2.5% p.a. Although the periods are not identical, an average outperformance of 8% would appear to be statistically significant.
5.1.3 Asher (unpublished) considered a similar time period a little more closely by distinguishing between shares in different sectors. This was to remove the influence of sector on share price (cf. ¶3.5.2 above). The difference in the percentage increase in share price during 1993 between accepted and rejected shares by sector is shown in Table 2.
5.1.4 Again, the results can be argued to be statistically significant, as there is only one chance in 16 of outperforming in all four sectors; a sign-test sketch is given below.
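As a check of this arithmetic, the sketch below computes the chance of outperforming in all four sectors if approval carried no information, treating each sector outcome as a fair coin flip; the binomial framing is ours, not the paper's.

```python
# Sign test for ¶5.1.4: if approved shares were no better than rejected ones,
# outperformance in each sector would be a 50/50 event, so outperforming in
# all four independent sectors has probability 0.5**4 = 1/16.
from math import comb

def sign_test_p(successes: int, trials: int) -> float:
    """One-sided binomial probability of at least `successes` out of `trials`."""
    return sum(comb(trials, k) for k in range(successes, trials + 1)) / 2**trials

print(sign_test_p(4, 4))  # 0.0625 = 1/16
```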
5.2 PRESENT STUDY
5.2.1 This study considers the relative performance of the 54 companies for which evaluations on the new (post-1996) social criteria could be found in the Unity Corporation offices. To these were added 7 additional companies that had been rejected,¹¹ although reports were not available.
5.2.2 The relative performance was calculated as the total return on each share less the average return for its sector, as sketched below. Share and sector prices and dividend information were obtained for the years 1998 to 2001 inclusive. The relative performance in each year was calculated, and then averaged for each share.
5.2.3 Only 38 companies had been listed for the whole four-year period. For the other companies, the relative performance was averaged over the period over which they had been listed. There were 45 data points for calendar 1998, 48 for 1999 and 55 for the last two years. It is not thought that this distorts the results significantly.
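A minimal sketch of this calculation, assuming yearly total returns and sector assignments are available as simple mappings (the data layout is ours):

```python
# Relative performance per ¶5.2.2-5.2.3: total return less the sector average,
# computed per year and then averaged over the years the share was listed.
from collections import defaultdict

def relative_performance(returns, sectors):
    """returns: {year: {share: total_return}}; sectors: {share: sector_name}."""
    rel = defaultdict(list)
    for year, by_share in returns.items():
        sector_totals = defaultdict(list)
        for share, r in by_share.items():
            sector_totals[sectors[share]].append(r)
        sector_avg = {s: sum(v) / len(v) for s, v in sector_totals.items()}
        for share, r in by_share.items():
            rel[share].append(r - sector_avg[sectors[share]])
    # average over the years in which each share was actually listed
    return {share: sum(v) / len(v) for share, v in rel.items()}

example = {1998: {"A": 0.12, "B": 0.02}, 1999: {"A": 0.08, "B": 0.10}}
print(relative_performance(example, {"A": "banks", "B": "banks"}))
```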
5.2.4 The pattern of the earlier investigations is shown in Table 3, and is confirmed.
5.2.5 The differences are impressive, but not statistically significant. There is a probability of almost 30% that these differences are the result of random or other unobserved effects.
5.3 FURTHER ANALYSIS
The correlation matrix for the 54 companies with social criteria data was calculated, and is shown in Appendix A. The most significant entry was the positive correlation between job creation and conditions of employment. This relationship did not appear to be related to sector: financial companies in particular appeared at both extremes. It seemed to have no significance for performance.
5.4 EMPOWERMENT
5.4.1 Empowerment had a positive correlation with all the other social variables; it was statistically significant in 4 of the 7 cases. This is hardly surprising, given that the questions in this category are heavily weighted to relationships with the unions, as are questions in each of the other categories.
5.4.2 Only 5 of the 30 points in this section relate to the form of black empowerment popular from 1994 to 1998, when individual black people were first given soft finance to take significant shareholdings in relatively small companies. Shares in these companies experienced something of a bubble, which had not entirely passed at the beginning of the period of this investigation. The impression of the author, when he attended meetings of the Unity board, was that the evaluation of black-empowerment companies was initially less critical than that of other companies.
5.4.3 In order to test this possibility, the five approved black-empowerment companies (defined by their shareholders) were separately examined. Their average relative return over the period was very low (-14%), given the ending of the bubble. This was not statistically significant (there was a 17% chance of error), but sufficiently large to consider excluding them from the sample. When this was done, the outperformance of the A-rated companies increased from 6% to 9%. The probability that this outperformance is random reduces to 13%.
5.5 REGRESSION ON ALL DATA
It was assumed that relationships, if they existed, would be more or less linear. A linear regression using the relative returns as the dependent variable, and the social scores as the independent variables, was therefore performed. The procedure was to see the effect of using all the variables, and then progressively to eliminate the least significant variables until the remaining variables were all significant at least at the 10% level; a sketch of this backward elimination is given below. The results using all the data proved fruitless: no variable was ever found statistically significant even at this level of significance.
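The elimination procedure can be sketched as follows; the data layout and the use of statsmodels are our choices, not the paper's:

```python
# Backward elimination per section 5.5: fit OLS of relative returns on all
# social criteria, then repeatedly drop the least significant variable until
# every remaining variable is significant at the 10% level.
import numpy as np
import statsmodels.api as sm

def backward_eliminate(X, y, threshold=0.10):
    """X: 2-D array of social scores; y: relative returns."""
    cols = list(range(X.shape[1]))
    while cols:
        model = sm.OLS(y, sm.add_constant(X[:, cols])).fit()
        pvals = np.asarray(model.pvalues)[1:]     # skip the intercept
        worst = int(np.argmax(pvals))
        if pvals[worst] < threshold:
            return cols, model                    # all survivors significant
        cols.pop(worst)                           # drop the weakest variable
    return cols, None
```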
5.6 REGRESSION EXCLUDING BLACK-EMPOWERMENT COMPANIES
The analysis was repeated excluding the black-empowerment companies. The results did achieve a significant result for affirmative action; Appendix B shows details of the regression. The points allocated for affirmative action were largely from objective comparisons of the proportions of black people and women in higher levels of management. This information is not widely available, so outperformance from this source should relate not to perceptions but to real underlying causes. Two possibilities are that high scores on this dimension relate to companies with good personnel practice, and that the promotion of blacks and women led to greater profitability for the company because their skills had previously been unrecognised.
5.7 REGRESSION EXCLUDING OTHER OUTLIERS
5.7.1 The plot of the residuals in Appendix B shows that the model fails to predict outperformance of up to 100% (1 on the vertical axis). (Residuals refer to results - outperformance in this case - that the regression model fails to explain.) The reasons are relatively easy to explain when one examines the data, which are set out in Appendix D. There are a few companies that have very large outperformance - of up to 103%. The reasons for this outperformance are not likely to relate to the social criteria. A particular problem with linear regression methods is that extreme outliers of this nature can completely change the estimated relationships between variables.
5.7.2 Two further progressive regressions were therefore performed; the two outlier treatments are sketched after ¶5.7.4 below. The first restricted the outperformance to a maximum of 50% and a minimum of -30%. It produced very similar results to those of Appendix B. Corporate governance, however, came close to statistical significance as positively correlated with performance.
5.7.3 The second approach was to exclude these extreme companies altogether. The results are shown in Appendix C. Unexpectedly, but in a reversal not unknown with regressions, affirmative action is no longer significant, but corporate governance and job creation become significant.
5.7.4 Job creation has a negative correlation with performance. Given the large number of job losses within the South African economy over the nineties, this suggests that the companies that responded most quickly to the technological changes at the root of these losses have been more profitable. If so, given the enormous social costs of the losses, it is to be hoped that this situation will come to a speedy end.
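Both outlier treatments in ¶5.7.2-5.7.3 amount to simple preprocessing of the dependent variable before re-running the elimination above; the function names are ours:

```python
# Outlier handling: either clip (winsorise) outperformance to [-30%, +50%],
# or drop the extreme companies from the sample altogether.
import numpy as np

def clip_returns(y, lo=-0.30, hi=0.50):
    return np.clip(y, lo, hi)

def drop_extremes(X, y, lo=-0.30, hi=0.50):
    keep = (y >= lo) & (y <= hi)
    return X[keep], y[keep]
```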
6.1 The social criteria adopted by the CGF have, if anything, enhanced its investment performance over the years. Three of the criteria appear to have been important. Affirmative action, which relates mainly to the number of blacks and women in senior positions, and good corporate governance were seen to have a positive effect. Job creation, under which heading is included general business expansion in South Africa as opposed to internationally, was found to have a negative impact.
6.2 The discussion of the process of share-price determination in this paper suggests that, with the possible exception of corporate governance, which could have an ongoing effect on company profitability, social criteria are likely to have small and transitory effects on investment performance.
Table 1. Initial social criteria
Table 2. Outperformance of approved shares in 1993
Table A1. Correlations between variables
Table B1. Summary output
Table C1. Summary output
"Economics",
"Business"
] |
Tailoring Non-Compact Spin Chains
We study three-point correlation functions of local operators in planar $\mathcal{N}=4$ SYM at weak coupling using integrability. We consider correlation functions involving two scalar BPS operators and an operator with spin, in the so called SL(2) sector. At tree level we derive the corresponding structure constant for any such operator. We also conjecture its one loop correction. To check our proposals we analyze the conformal partial wave decomposition of known four-point correlation functions of BPS operators. In perturbation theory, we extract from this decomposition sums of structure constants involving all primaries of a given spin and twist. On the other hand, in our integrable setup these sum rules are computed by summing over all solutions to the Bethe equations. A perfect match is found between the two approaches.
Introduction and Main Result
In this paper we study three-point correlation functions of local operators in planar N = 4 SYM at tree level and at one loop using the underlying integrable structure of the theory [1]. We will study the structure constant C ••• governing the correlation functions involving two scalar BPS operators and a spin S operator in the so called SL(2) sector as depicted in figure 1. Our work generalizes some of the results in [2,3] from the compact SU (2) case to the non-compact SL(2) setup. We start by describing our setup and presenting our main result (12).
We consider the correlation function (2), where the bar stands for complex conjugation.¹ Each protected operator is taken to be in an SU(2) sector and is therefore parametrized by two integers that indicate how many complex scalars it is made of, see figure 1.
The operator O^(2)_BPS(x) is given by a similar expression with l → L − l and with the complex scalar X replaced by its conjugate X̄.
The non-BPS spin S operator is more interesting and its form is governed by a non-trivial wave function,
O_S(x) = Σ_{1≤n_1≤n_2≤···≤n_S≤L} ψ(n_1, . . . , n_S) O_{n_1,...,n_S}(x) ,   (4)
where O_{n_1,...,n_S}(x) stands for an operator with L scalars and derivatives at positions n_1, n_2, etc. We also include conventional 1/m! numerical coefficients if m derivatives act on the same scalar field. That is, O_{n_1,...,n_S}(x) = ∏_j (1/m_j!) Tr(D_+^{m_1}Z D_+^{m_2}Z · · · D_+^{m_L}Z)(x), where D_+ is a covariant derivative in a light-like direction and m_j stands for the number of derivatives acting on the j-th scalar Z. For example, O_{1,2,4} = Tr((D_+Z)(D_+Z)Z(D_+Z)ZZ . . . ) and O_{2,2,4} = (1/2!) Tr(Z(D²_+Z)Z(D_+Z)ZZ . . . ), etc. For generic twist L and spin S, there are several possible primary operators (4) corresponding to different possible wave functions ψ. These wave functions are found by requiring that the states (4) diagonalize the quantum corrected N = 4 dilatation operator.
¹ To be more precise, this normalization condition does not fix the structure constant completely, since we can always multiply any of the three operators by a phase. This does not affect the two-point functions (2) but changes the phase of C ••• in (1). Hence, by itself the structure constant is not a physical quantity, but its absolute value is. In this paper we always use the freedom of tuning the phase of the external operators to set the structure constant in (1) to be real.
² We assume that N ≥ 1, but the exact value of N will be irrelevant for the most part. For example, the dependence on N is trivial and factorizes in our main result (12). The reason why we do not consider the N = 0 case is that the length of the non-BPS operator would then be equal to the sum of the lengths of the two BPS operators. In other words, there would be no propagators on the top of figure 1 and this correlator would be extremal. For extremal correlators we would also need to take into account the mixing of the large operator with double traces, which is an annoying complication. On the other hand, this same mixing is suppressed at large N_c if the correlator is not extremal. This is why we want at least a small bridge on top of figure 1, i.e. N ≥ 1. This same reasoning led to the SU(2) setup of [2] for scalar operators.
At tree level we have a large degeneracy of several primary operators with the same classical dimension L + S. As we turn on the coupling, these dimensions acquire quantum corrections and this degeneracy is lifted. Since the degeneracy is lifted already at one loop, the one loop eigenstates are enough to parametrize the states at any order in perturbation theory. Both the leading order eigenstates and their first loop corrections are described in detail in Section 3.
For now their precise form is not important. It suffices to know that each primary operator is parametrized by a set of real numbers {u_1, . . . , u_S} called Bethe roots. These Bethe roots are constrained by a set of so called Bethe ansatz equations, which arise once we impose periodicity of the wave function in (4) and which take the form [4,5,6,7]
e^{ip(u_j)L} ∏_{k≠j} S(u_j, u_k) = 1 ,   j = 1, . . . , S .   (6)
In this expression the momentum p(u) and the S-matrix S(u, v) are best parametrized using the so called Zhukovsky variables x(u), defined by x(u) + 1/x(u) = u/g, where the coupling g is related to the 't Hooft coupling λ by g² = λ/(16π²). In terms of these variables, p(u) and S(u, v) take simple closed forms. The BES dressing phase σ²(u, v) [8] is irrelevant and can be set to 1 throughout this paper, since it first deviates from 1 at four loops, which is way beyond the scope of this work.
The different solutions to the Bethe equations (6) are in one-to-one correspondence with the different possible primaries (4). The quantum corrected dimension Δ of the operator O_S - which appears in the exponents in (1) and (2) - is simply Δ = L + S + γ(u_1, . . . , u_S), where the anomalous dimension γ = O(g²) is determined by the Bethe roots, see (10). It is not hard to count the number of solutions to (6) for a given spin S and twist L.
For example, for twist L = 2 there is a single solution to (6) for even S and no solution for odd spin. This means that for twist 2 there is actually no degeneracy at all. This makes the study of these operators and their correlation functions considerably simpler and, indeed, these are the operators that are studied in greater depth in the literature.
The structure constant C ••• in (1) depends explicitly on the integers N, L, l, S, on the coupling g and on the set of Bethe roots {u_1, . . . , u_S} solving (6). The purpose of this paper is to study this quantity at tree level and one loop in the planar limit. We derive this quantity at tree level and propose a conjecture for its value at one loop; together these constitute our main result (12). The result (12) is, not surprisingly, strikingly similar to the analogous SU(2) result - see equation (24) in [3] and Appendix E for a detailed comparison. The contribution B depends only on the non-BPS operator and is given by a simple determinant of Gaudin type. The most interesting contribution is A_l, which depends explicitly on the three-point function setup through the integer l, see figure 1. This contribution can be written as a sum over all possible ways of splitting the Bethe roots {u_j} into two partitions. Finally, the function f(u, v) entering this sum is given in (15); its form is the main difference to the SU(2) result, which is reviewed in appendix E. This concludes the discussion of our main result for the structure constants in (1).
It would be very interesting to generalize this computation to more general correlation functions, where more than one operator has spin, see e.g. [11] and [12] for interesting works in this direction. It would also be instructive to study interesting limits of our conjecture (12), such as large spin limits, in the spirit of [13] and [14]. Other very interesting limits to play with would be those where the integer spin is analytically continued to complex values and taken to extreme values such as S → −1, where all-loop constraints [15] might guide us in figuring out the next quantum corrections to (12). Our work, when combined with [2,3] and [16], provides valuable hints about the structure of correlation functions of generic local operators in planar N = 4 SYM theory. It would be very interesting to combine all these results into a single description of very general correlators.
We present some non-trivial checks of (12) against available perturbative data in section 2. In section 3 we present the derivation/motivation of our conjecture. We also present some speculative remarks in that section. Sections 2 and 3 can be read independently of each other. Additional details are presented in the appendices.
Conjecture versus Data
In this section we compare our prediction (12) with the results in the literature obtained by direct perturbative computations. We find perfect agreement with all the available data.
Twist 2 operators
We start by studying the simplest possible case in figure 1, where the non-BPS operator has minimal twist L = 2 (and l = 1). As is well known, for even spin there is a single primary operator of the form (4) with twist two, and for odd spin there is no primary operator at all with this twist. Indeed, there is a single solution to the Bethe equations (6) for L = 2 and S even, and there are no solutions for L = 2 and S odd. Furthermore, for L = 2, the Bethe roots u_j = u_j^(0) + g² u_j^(1) + . . . are also particularly simple to find. To leading order, they are given by the zeros of a Hahn polynomial [17]. With Mathematica, the roots of the Hahn polynomials for each spin S can be computed with arbitrary precision; a simple numerical alternative is sketched below. Once the leading order position of the Bethe roots is found, the quantum corrections u_j^(1) are computed by linearizing the Bethe equations around this solution. This can again be done with arbitrarily high precision. Finally, the Bethe roots are plugged into (12). It turns out that the final result for the (square of the) structure constant (12) up to one loop can be expressed in terms of rational numbers. For example, for the first few spins we find the rational values listed in (17), and so on. These are the integrability-based predictions for the correlator in figure 1 for L = 2, l = 1 and generic N (in this simple case the N dependence cancels out).
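As an illustration of this step, here is a minimal numerical sketch (ours, not the authors' Mathematica code) that solves the leading-order Bethe equations in logarithmic form for twist L = 2 and spin S = 2 and evaluates the one-loop anomalous dimension; the mode numbers n_j = ±1 and the normalization γ = 2g² Σ_j 1/(u_j² + 1/4) are assumptions consistent with the known twist-2 results.

```python
# Leading-order SL(2) Bethe equations for twist L, in logarithmic form:
#   L * p(u_j) = 2*pi*n_j - sum_{k != j} 2*arctan(1/(u_j - u_k)),
# with p(u) = 2*arctan(1/(2u)).  Mode numbers n_j = (+1, -1) are assumed.
import numpy as np
from scipy.optimize import fsolve

def bethe(u, L, modes):
    eqs = []
    for j, uj in enumerate(u):
        lhs = L * 2 * np.arctan(1 / (2 * uj))
        rhs = 2 * np.pi * modes[j] - sum(
            2 * np.arctan(1 / (uj - uk)) for k, uk in enumerate(u) if k != j)
        eqs.append(lhs - rhs)
    return eqs

L, modes = 2, (1, -1)                      # twist 2, spin S = 2
roots = fsolve(bethe, x0=(0.3, -0.3), args=(L, modes))
print(roots)                               # approx +-1/(2*sqrt(3)) = +-0.2887
gamma = 2 * sum(1 / (u**2 + 0.25) for u in roots)
print(gamma)                               # 12, i.e. gamma = 12 g^2 (Konishi)
```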
At tree level this structure constant was first computed by Dolan and Osborn in [18] by analyzing the operator product expansion of four BPS operators. Since there is a single primary with twist L = 2 and two units of R-charge, its contribution to the OPE is particularly simple to single out. This analysis was generalized to one loop in [19]. The results of [18] and [19], given in (18), perfectly agree with our predictions (17). Recently, this same three-point function was also computed directly in perturbation theory in [20].
Recently, the computation of four-point functions of BPS operators in the so called 20′ representation was revived due to the discovery of a hidden permutation symmetry [21] which completely determines the correlation function integrand up to remarkably high loop orders [22]. Eden managed to OPE decompose these results, thus predicting the value of the structure constant in (18) up to three loops for arbitrary spin S [23]. Any proposal for the higher loop corrections to the integrability result (12) ought to reproduce this formidable amount of data. Hopefully this can be used as a powerful guiding principle when looking for such corrections.
Higher twist operators
We shall now repeat the previous analysis for operators with larger twist. One important difference is that for larger twist L the spin S is not enough to uniquely specify the primary operator. Instead, there are several primary operators with the same spin, and these are in one-to-one correspondence with the several solutions to the Bethe equations (6). For each primary operator we can predict the corresponding structure constant in (1) by simply plugging the corresponding solution {u_1, . . . , u_S} of the Bethe equations into (12). For example, for L = 4 and spin S = 4 there are five solutions to (6), one of which is given in (19). (It is trivial to increase the precision of these roots arbitrarily, as needed.) We can now plug these Bethe roots into (12) to obtain the structure constant in (1) for the corresponding primary operator.
We should also specify N and l, which parametrize the BPS operators, see figure 1. For L = 4 the most symmetric setup is the one where the BPS operator is evenly split in two, that is, l = 2. The N dependence is not so interesting, since it factors out in (12), but we do want it to be positive, see footnote 2. Hence, for concreteness, let us consider the simplest possible case, corresponding to N = 1 and l = 2.
Then, for the Bethe roots (19), we find the prediction (20). We would now like to check this result against a direct field theory computation.
More precisely, the tree-level result in (20) does not need to be checked, since it is derived in section 3.2 and hence is definitely correct. However, we would like to check the one-loop correction, which is a conjecture. Unfortunately, there is no direct perturbative computation of structure constants of higher-twist operators: the work [20] considered twist-2 operators only. It would be very interesting to generalize [20] to arbitrary twist and check our predictions such as (20) for arbitrary twist L and spin S primaries.
In the meantime, to check our conjectures, we take a small detour. We will see that while we cannot match the values of individual structure constants such as (20), we can easily check particular sums of structure constants involving all possible primaries of a given spin and twist.
Sum Rules and the OPE Decomposition
One way of obtaining structure constants which does not rely on computing three-point functions directly in perturbation theory is by analyzing four-point correlation functions. In this approach one decomposes these objects into conformal partial waves and reads off the dimensions and structure constants of the operators flowing in this decomposition [18]. In perturbation theory we expand around the point g = 0, where there is a huge degeneracy. Hence the conformal partial wave decomposition - in its most obvious form - yields particular sums of structure constants for primary operators with the same twist and spin. This approach is particularly powerful because from a single four-point function we can extract infinitely many such sums for infinitely many operators that flow in the OPE.
With this motivation in mind, we now split our discussion into two parts. We start with a discussion on the computation of such sums from the integrability point of view. Afterwards we match these against explicit OPE decompositions of four-point functions available in the literature.
The sums P^(n,m)_S that will arise in the OPE can be concisely described with the generating function (21).⁴ In (21), y is simply a bookkeeping parameter used to define the generating function. Since the anomalous dimensions γ_u = O(g²), once expanded in perturbation theory, the y-dependence of the right hand side of (21) matches that of the left hand side, so that the sums P^(n,m)_S are well defined. For twist L = 4 and spin S = 4 there are five solutions to (6); one of them yields the structure constant (20). We now find all five solutions, plug them into (10) and (12) and add them up as in (21). We find a nice surprise: the sum over the five solutions is a clear rational number (we adjusted the precision of the several terms to highlight the periodic nature of the digits characteristic of rational numbers). In the same way we can predict several more sums for different values of S (keeping the same twist L = 4 and also N = 1 and l = 2 as above). For example, for n ≤ 3 we find the values collected in Table 1.
⁴ We added a subscript u to the anomalous dimension (10) and to the structure constant (12) to emphasize that they depend on the particular solution to the Bethe equations.
All the very non-trivial looking rational numbers in table 1 can be matched against perturbative data by OPE decomposing appropriate four-point functions as we now explain.
Consider the correlation function of four BPS operators, (22), where z and z̄ are the usual cross-ratios, which behave as z, z̄ → 0 in the OPE limit x_1 → x_2 which we will be considering. The OPE expansion is a double expansion in z and z̄, where powers of z measure the conformal spins (dimension plus spin) of the exchanged operators whereas powers of z̄ measure their twists (dimension minus spin), see [18] for more details.
Now, the OPE of Tr(ZZX)(x_1) and Tr(ZZX)(x_2) produces operators with 4 units of R-charge in the Z direction. Hence the operators with the smallest possible twist in this OPE are of the form (4) with twist L = 4. Therefore the contribution of the leading-twist operators to G(z, z̄) is completely governed by the sums P^(n,m)_S introduced above. This contribution is given by the leading terms as z̄ → 0, as we now review. More precisely, the leading-twist contribution takes the form G(z, z̄) = z̄² f(z, τ) + O(z̄³). At order n in perturbation theory, f(z, τ) is a polynomial in τ ≡ (1/2) log(zz̄) of degree n, built from functions f^(m)_S. This is easily derived from the z̄ → 0 expansion of the standard conformal blocks, see e.g. [18]. These functions admit a regular expansion at small z, and the larger S is, the more suppressed the f^(m)_S are in the OPE limit.
We now discuss the comparison between (27) and (28). First, and most importantly, we note that the terms that are captured by both expressions perfectly match! These are the two leading powers of τ up to two loops -shown in blue in the first three lines of (27) and (28). The remaining terms in these expressions -coloured in magenta -are also very interesting as we now discuss.
From the terms in magenta in the perturbative computation (28) we can now read off the sums P^(2,0)_S, see table 2. All these numbers should be matched by any candidate for the next quantum correction to (12). For example, one could try to cook up an educated guess for the two-loop correction to (12) and use this data to constrain the potential of this guess. We played a bit with these ideas but were not imaginative enough to come up with the right ansatz thus far. The terms in magenta in the integrability expression (27) are also interesting: they provide predictions for the next loop corrections (3 loops and higher) to the four-point correlation function (22), which might be useful in constraining or simplifying the perturbative computation of this object.
We repeated this comparative analysis for several other cases. In appendix C, for example, we present the analogue results for the sum rules P (n,m) S for twist L = 3 (and l = 1) and also for twist L = 6 (and l = 2).
Finally, we should point out that we could also consider slightly more general sums involving a product of two different structure constants, as in (29). These would govern the OPE behaviour of a less symmetric correlation function (compared to (22), where all external operators have the same size). For example, from the OPE analysis of a correlation function such as (30) we would be able to read off the sums (29), where C•••_u would correspond to (12) with L = 4, N = 1 and l = 1, while the second structure constant would be given also by (12) but for L = 4, N = 1 and l = 2.
In table 5 in appendix C we present the integrability predictions for the sum rules (29) for this case. From these sums, following the same kind of analysis as above, we can predict that G̃(z, z̄) = z̄² f̃(z, τ) + O(z̄³), where f̃(z, τ) is given in (68) in appendix C. Chicherin and Sokatchev kindly shared with us their unpublished two-loop result for (a generalization of) the four-point correlation function (30) [25]. We OPE expanded their proposal, finding perfect agreement with our predictions for f̃(z, τ). From their two-loop results one can also extract the first two-loop data for the asymmetric sum rules (29).
We should emphasize again that all these detours - involving constructing and checking sum rules for structure constants rather than the structure constants themselves - stem from the absence of any perturbative results for three-point functions involving operators of generic twist. It would be very interesting to develop further the perturbative side of this story and directly check predictions such as (20) against a direct perturbative computation of the three-point function (1).
Alternatively -and given that these sum rules seem to be simpler than the individual terms in the sum -it would be interesting to develop an integrability based approach for computing the sums directly.
3 Derivation (tree level) and educated guess (1 loop)
In this section we explain how (12) arises from an integrability-based approach. At tree level we derive this result; at one loop it is an educated guess whose motivation we will present.
The (ultra-local) wavefunctions for the operator O S
One important ingredient in the three-point function (1) is the form of the non-BPS operator (4). Since we are interested in the one loop structure constants we need the non-BPS operators at O(g 2 ). These operators diagonalize the two loop planar dilatation operator of N = 4 SYM.
Strictly speaking, the form of an operator is not a very physical quantity, since we can always change it by performing field redefinitions. In other words, we can always apply similarity transformations to the dilatation operator. Still, we can adopt a particular scheme. Physical quantities such as the structure constants and the operator dimensions will not depend on that choice.
In the literature there are two representations of the dilatation operator, related by a similarity transformation. One was worked out by Eden and Staudacher in [26] and the other by Zwiebel in [27]. Each has its own advantages and drawbacks. The representation of [26] is very simple and explicit; however, it was worked out only for operators with spin S = 1, 2 or 3. The representation of [27] can be applied to operators of any spin; however, it is considerably harder to manipulate.⁷ We checked explicitly that for S = 1, 2 or 3 the two Hamiltonians are related by a similarity transformation. We will mostly use [26]; some comments on the comparison with [27] are presented in appendix D.2.
For spin S = 1, 2 and 3 the eigenvectors of the dilatation operator of [26] take the form (4) with the wave functions
ψ(n_1) = φ_1 ,
ψ(n_1, n_2) = φ_12 + S_12 φ_21 ,
ψ(n_1, n_2, n_3) = φ_123 + S_12 φ_213 + S_23 φ_132 + S_23 S_13 φ_312 + S_12 S_13 φ_231 + S_12 S_13 S_23 φ_321 .   (31)
⁷ In the representation of [27] the Hamiltonian is written in terms of a bilinear of supercharges Q. The full Hamiltonian acts inside the SL(2) sub-sector (4) as expected, but each individual supercharge does not. Hence, to deal with this representation, in intermediate states we need to consider more general operators, with fermions and so on. For large spins this makes the use of this representation very cumbersome, even using a computer.
We now explain in detail the meaning of these symbols. First note that we can always change the overall normalization of the wave functions so that the two-point functions satisfy (2). The S-matrices S_ab = S(u_a, u_b) in (31) appeared already in the introduction. As mentioned there, the Bethe rapidities u_a are a particularly nice parametrization of the momenta p_a = p(u_a), which are quantized according to the Bethe equations (6). Finally, we have the plane waves φ_{a_1...a_S} = e^{i p_{a_1} n_1 + ··· + i p_{a_S} n_S} (1 + g² δ_{a_1...a_S}), where the δ_{a_1...a_S} are the so called contact terms or fudge factors introduced in [5]. They vanish if the particles are well separated. Let us postpone their discussion for now.
Clearly, these wave functions have a transparent physical meaning. They describe a set of particles which scatter among themselves in a factorized way. Their form is typical of integrable models with local interactions and is said to be of Bethe ansatz form. The Bethe equations (6) are nothing but the periodicity condition for these wave functions. The generalization to more particles is then straightforward: we simply add up S! plane waves decorated by the appropriate products of S-matrices, as in the sketch below.
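The combinatorial structure just described is easy to make concrete. The following minimal sketch (ours; leading order only, no contact terms) builds ψ(n_1, . . . , n_S) as the sum over all S! orderings of the rapidities; the S-matrix sign convention is an assumption fixed so that the two- and three-particle cases reproduce (31) with the Bethe equations in the form e^{−ip_1 L} = S(u_1, u_2) quoted later in the text.

```python
# Coordinate Bethe ansatz wave function, leading order: S! plane waves, each
# decorated by one S-matrix factor per inversion of the rapidity ordering.
from itertools import permutations
import numpy as np

def S_matrix(u, v):
    # leading-order SL(2) S-matrix; sign convention assumed as described above
    return (u - v + 1j) / (u - v - 1j)

def momentum(u):
    # e^{ip} = (u + i/2)/(u - i/2) at leading order
    return -1j * np.log((u + 0.5j) / (u - 0.5j))

def psi(positions, rapidities):
    """psi(n_1 <= ... <= n_S) for magnons with the given rapidities."""
    S = len(rapidities)
    total = 0j
    for perm in permutations(range(S)):
        amp = 1 + 0j
        for a in range(S):
            for b in range(a + 1, S):
                if perm[a] > perm[b]:          # one inversion -> one S-matrix
                    amp *= S_matrix(rapidities[perm[b]], rapidities[perm[a]])
        phase = sum(momentum(rapidities[perm[a]]) * positions[a]
                    for a in range(S))
        total += amp * np.exp(1j * phase)
    return total

# two magnons: psi = phi_12 + S_12 phi_21, as in (31)
print(psi([1, 3], [0.28868, -0.28868]))
```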
It is also clear from this physical picture that the Bethe ansatz might need to be slightly improved if the interactions have some finite range. When the particles are within the interaction range, the wave functions should be corrected. In N = 4 SYM the planar dilatation operator can be thought of as a local Hamiltonian whose range increases by one unit at each order in perturbation theory. The contact terms δ_{a_1...a_S} precisely take into account the finite-range nature of the interactions and correct the wave function for nearby particles. For a single particle δ_a = 0. For two particles, the most generic contact terms that we might expect to encounter at two-loop order take the forms (33) and (34).
Again, so far everything we wrote is very generic and would be roughly the same for any integrable model with such finite (but short) range interactions. This formalism was dubbed the asymptotic Bethe ansatz in [28,5].
We will now review a very special feature of this particular SL(2) spin chain, which we dub the ultra-local nature of the contact terms. It turns out that at one loop we only have non-zero contact terms when the particles sit right on top of each other. There are no contact terms when they are merely next to each other, C••(p_a, p_b) = 0 (35), and similarly for three particles (36). That is, up to three particles we actually only need two contact terms: the two-particle contact term at coincident sites, C• •(p_a, p_b), and its three-particle analogue C• • •(p_a, p_b, p_c). This was also pointed out in [26]. The precise form of the two relevant contact terms is written in appendix D.1, see (69) and (70).
At this point it is very natural to assume that this ultra-local simplification generalizes in the most obvious way to four and more particles as well. This is the most important outcome of this discussion, and we will make use of it later when arguing for (12). Unfortunately, since the Hamiltonian in [26] was only written up to S = 3, it is not possible to straightforwardly check this conjecture for a few more cases with S = 4, 5, etc.
Finally, we found that the norms of the states (31) are given by the typical Gaudin norm. This was checked at tree level already in [2], but here we checked it for the quantum corrected states (31) as well; the result is the determinant expression (37). In the structure constant this factor should appear in the denominator, since we should normalize the non-BPS operator O_S as in (2). This explains the factor 1/B in (12). In the next sections we discuss the factor A_l, the most interesting part of the full result; a numerical sketch of the Gaudin-type determinant is given below.
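Concretely, a small numerical sketch (ours; the overall convention-dependent normalization is an assumption) evaluates the Gaudin-type matrix of derivatives of the logarithmic Bethe phases by finite differences:

```python
# Gaudin-type norm: B ~ det(d phi_j / d u_k), where phi_j is the total
# logarithmic Bethe phase of root j (momentum term plus scattering terms).
# Leading-order expressions only; overall normalization is convention-dependent.
import numpy as np

def phases(u, L):
    phi = np.zeros(len(u))
    for j, uj in enumerate(u):
        phi[j] = L * 2 * np.arctan(1 / (2 * uj)) + sum(
            2 * np.arctan(1 / (uj - uk)) for k, uk in enumerate(u) if k != j)
    return phi

def gaudin_det(u, L, eps=1e-6):
    S = len(u)
    G = np.zeros((S, S))
    for k in range(S):
        up, dn = np.array(u, float), np.array(u, float)
        up[k] += eps
        dn[k] -= eps
        G[:, k] = (phases(up, L) - phases(dn, L)) / (2 * eps)
    return np.linalg.det(G)

print(gaudin_det([0.28868, -0.28868], L=2))   # twist-2, S=2 roots
```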
Tree-level derivation
At tree level we know everything about all three states in (1), and all we need to do is Wick contract them to compute the structure constant (12). We now sketch the derivation, following [2] closely (see in particular section 3.1 there).
To evaluate (1) we organize the Wick contractions into a sum over the number M of Bethe roots associated with one of the subchains, where N is a simple factor due to the normalization of each of the three operators; for example, it contains the Gaudin norm (37) discussed in the previous section. This expression can be dramatically simplified, as we now explain. First note that for each value of M we get the expected space-time dependence of a three-point function. Furthermore, the sum in the parentheses is nothing but the scalar product between an SL(2) off-shell Bethe state and a vacuum descendant. Those scalar products can be computed straightforwardly by changing a few signs in the SU(2) result - see appendix A.2 of [2] - and yield the factor A_l in (12). This concludes the sketch of the tree-level derivation.
One-loop conjecture
In the previous section we derived (12) at tree level. At one loop the structure constant receives g 2 corrections due to two different effects.
• On the one hand the wave functions in (4) get corrected. This includes corrections to the S-matrices of the excitations as well as the inclusion of the contact terms discussed in section 3.1. Unfortunately we do not have a solid description of the quantum corrected states for S ≥ 4.
• On the other hand, at one loop, we need to add loops to the tree-level Wick contractions discussed in the previous section. This second effect can be taken into account by inserting a splitting operator acting on the legs in figure 1 at the splitting points, as described in [30,31]. Such operators are well understood for three-point functions involving scalars [30,31]. Unfortunately, for operators involving derivatives they are not known in full generality. It should be possible to generalize the results of [20,31] to fill this gap.
Given our ignorance about either type of corrections we have to resort to some guesswork to motivate (12).
Our main assumption is that the form of the result is a minor deformation of the tree-level result. That is, we assume the ansatz (41), with building blocks (42) and (43). In this ansatz S(u, v) is the loop-corrected S-matrix, which is known. So the only unfixed ingredients in our guess are the quantum corrections to the prefactor and to the function f. It is very important to point out that the ansatz (41) is not a random guess. On the contrary, in the computation of a similar structure constant - but with the non-BPS operator made out of scalars (instead of derivatives) - the quantum corrected result took exactly this form [3]. This is the main motivation for (41).
In the SU(2) scalar case the quantum corrected structure constants were actually derived rigorously in [3] and, in particular, the outcome of this computation yielded a particularly simple result for the prefactor. As our very first guess, we assume that our prefactor in (42) takes the same form.
Next we turn to the quantum corrections to the function f(u, v) in (44). This function can be constrained by two simple requirements.
• First we impose that our result should be invariant under l ↔ L − l since this is an obvious reflection symmetry of our setup, see figure 1. In other words, we should have A L−l = A l .
• Next we impose that A_0 = 0. When l = 0, the three-point function of one non-BPS primary operator with two BPS operators formally reduces to a two-point function between a non-BPS operator and a BPS operator. The latter should clearly vanish; hence we impose that A_0 = 0.
This assumption looks very innocent but it is actually a bit trickier than it sounds; it does not hold for the SU (2) case mentioned above, for example, while naively the exact same logic would lead to this same conclusion. For now let us ignore this subtlety; we will come back to this point in the next subsection.
To analyze the consequences of the previous two requirements it is enough to consider the case with two excitations. According to the previous two points we should have A_0(u_1, u_2) = A_L(u_1, u_2) = 0. In the second condition we can get rid of L by using the Bethe equations e^{−ip_1 L} = S(u_1, u_2) and e^{−ip_2 L} = S(u_2, u_1) = 1/S(u_1, u_2). Combining both equations we find the remarkably simple result f(u_1, u_2) = 2/(1 + S(u_2, u_1)), which expanded in perturbation theory leads to (15). Nicely, (48) automatically leads to A_{L−l}(u_1, . . . , u_S) = A_l(u_1, . . . , u_S) for any l and for any S.
This concludes our motivation of the conjecture (12). Given that many points lack a solid derivation it is very important to check this prediction against perturbation theory to provide solid evidence for it. This was the purpose of section 2.
Further Comments and Speculative Remarks
As anticipated above, the assumption A_0 = 0 is not as innocent as it might sound, since the l → 0 limit might be singular. Indeed, this same requirement should naively also apply to the SU(2) setup studied in [2]; however, in that case it fails [3]. That is, for the SU(2) case the relation (48) only holds at tree level, while in SL(2) we claim that it holds at least up to one loop.
One possible explanation is the following. When splitting an SL(2) state into two subchains we get (a sum of) two decoupled Bethe states on each subchain. This holds because of the ultra-local nature of the contact terms reviewed in section 3.1. This is why the l → 0 limit of the structure constant is non-singular and our argument above for A_0 = 0 should hold for our SL(2) setup. In contradistinction, the SU(2) contact terms, which appear at order g², are not ultra-local [3]. Therefore, when we cut an SU(2) chain, the states on the two resulting subchains know about each other. As such, the l → 0 limit of the SU(2) structure constant is potentially more singular at one loop. This is probably why we do not have the right to impose the A_0 = 0 condition for the SU(2) case beyond tree level.
Incidentally, there seems to be a nice connection between the ultra-locality of the SL(2) states and some manifestation of dual conformal symmetry of the eigenstates [32,33]. Hopefully this approach will clarify this unusual ultra-locality. Furthermore, if this ultra-locality were preserved at higher loops, we could probably conjecture the next quantum corrections to (12) by following the logic outlined in the previous subsection.
This concludes the discussion of the subtle requirement A_0 = 0. We would now like to discuss the other main requirement, namely the condition A_{L−l} = A_l. This relation should be quite robust, since it only relies on an obvious reflection symmetry of our SL(2) three-point functions, see figure 1. It is bound to work as well for the SU(2) case, see figure 2. This condition is not enough to fix the function f completely; however, it does imply the functional relation (49). This relation is indeed satisfied both in the SU(2) and the SL(2) cases at tree level and at one loop. If the all-loop expression for the structure constant is given by a deformation of (41), where A_l is still given by a sum over partitions of Bethe roots as in (43), then (49) should hold to all loops. Of course, this is a big if.
Nonetheless, the relation (49) resembles some sort of Watson equation for form factors [34]. Perhaps this is more than a coincidence. After all, as recently advocated [35], it is natural to expect form factors to play an important role in the study of three-point functions. Can f be given some nice finite coupling definition in terms of form factors of the BMN string? If so, we might hope to bootstrap it exactly.
Relations of the form (49) recently played a central role in a very different context, namely in the computation of null polygonal Wilson loops in planar N = 4 SYM theory [36]. There, the so called fundamental relation is a functional equation for the so called pentagon transitions P, which reads P(u|v) = S(u, v) P(v|u) [36], where S(u, v) is the S-matrix for the fundamental excitations on top of the GKP state. One might wildly speculate whether the pentagon transitions P(u|v) and our function f(u, v) are not so different after all. Can it be that they are similarly defined objects but naturally defined on top of different vacua? (Namely the BMN string [37] for f(u, v) and the GKP string [38] for P(u|v).) It would be very interesting to study the next loop correction to (41) and confirm that it is still given by some simple deformation of (43) and (42). This would definitely give very strong support to these speculative ideas and strongly motivate us to push them further.
Along these lines, it would be interesting to investigate whether the strong coupling results [39] can be written as (the classical continuum limit of) some strong coupling deformation of (43) or of its scalar counterpart [13,3].
Finally, it would be fascinating to develop further the (algebraic) integrability description of the eigenstates of the dilatation operator at higher loops. For operators with derivatives we have roughly no control over the operators beyond one loop. With an algebraic description à la [3,40,41,42] we would be able to make substantially more rigorous progress, with considerably less guesswork.
A More details on the twist 4 analysis
In [19,24] the general four-point function of scalar BPS operators O_p(x_j, t_j) = (t_j)_{a_1} · · · (t_j)_{a_p} Tr(Φ_{a_1} · · · Φ_{a_p})(x_j) is given explicitly up to two loops, where p is the twist of the BPS operators and j = 1, . . . , 4. The polarization vector t_j is null and the index a_n = 1, . . . , 6 is the usual SO(6) R-charge index. To make contact with our correlation function (22) we choose p = 3 and take appropriate null polarization vectors for each of the external operators in (22). The polarized four-point function (22) is then obtained by evaluating the general result with these polarizations. In this way we read off G(z, z̄) from [19,24]. Its OPE expansion is given in (28).
B Explicit expressions for the sums P^(n,m)_S
In perturbation theory we expand the structure constants and the anomalous dimensions in powers of g², such that, from (21), we find explicit expressions for the first few sums P^(n,m)_S (the sums run over all solutions to the Bethe equations for a given L and S), etcetera.
C Other examples: Twist 3, Twist 6 and a special Twist 4
In this appendix we present some more sum rules and their relation to the conformal partial wave expansion of the corresponding four-point functions.
Someone bold could be tempted to conjecture that C• . . .
while for excitations which are more widely separated there is no need for any change of basis. In Zwiebel's basis the wave functions still take the form (31), but now all of the contact terms in (33) and (34) are present. In other words, the nice ultra-local nature of the contact terms alluded to in section 3.1 is lost. For example,
C^Z_{• •}(p_1, p_2) = sin²(p_1/2) + sin²(p_2/2) − (1/2) sin²((p_1 + p_2)/2) + 1/2 .   (72)
The three-particle contact terms can also be obtained by following the change of basis written above, but they are quite messy and unilluminating to write down.
E Comparison with SU (2)
Here we review the value of the SU(2) structure constant - depicted in figure 2 - at tree level and one loop [3]. We massage the result of that paper slightly by conjugating it and then choosing the phase of the non-BPS operator so that the structure constant is real (for real roots). The resulting expressions involve the SU(2) S-matrix S_SU(2)(u, v) and a corresponding function f_SU(2)(u, v). The reader might be puzzled that the S-matrix S_SU(2)(u, v) and the function f_SU(2)(u, v) in this appendix are the complex conjugates of the expressions reported in [3]. However, we should note that the ip(u_j)L factors in (74) and (76) also appear with an opposite sign compared with [3]. That is, our expressions are simply the complex conjugates of those in [3]. Since the final result is real, this complex conjugation is not an issue. We chose to perform this conjugation to highlight the similarities between the SU(2) and the SL(2) results in the conventions (for the S-matrix) used in this paper.
Peculiarities of Gamification in Foreign Language Teaching in the Context of Digitalization of Education
The focus of the present study is the concept of gamification in the educational process. The relevance of the problem examined in the article is due to the tendency towards the digitalization of education and the lack of empirical research on the implementation of gamification in foreign language teaching. The goal of this study was to confirm that foreign language teaching with the use of gamification technology is more effective than traditional ways of teaching and teaching with the use of classical gaming technologies. This article examines the findings of a theoretical analysis of scientific literature on the concept of gamification, its elements and player types. The paper describes the difference between gamification and such concepts as play, game, simulation and serious games. The empirical research is based on a pedagogical experiment whose results show that the use of gamification in foreign language teaching makes it possible to improve the language competencies of students. Meanwhile, it is noted that the implementation of gamification in the educational process requires a complex approach: it requires understanding the students' player types, developing a game application, and controlling the implementation of gamification into the traditional forms of teaching at all stages.
Introduction
The modern world is becoming more and more digital. New technologies and innovations are being implemented in virtually all spheres of human activity. It is impossible to ignore the processes of computerization and digitalization in society, or the use of digital technologies in education. The digital educational environment being developed will contain educational content and information solutions, and will provide integration with relevant regional information resources.
The transfer of education to a digital format has prompted educational organizations to change the educational process as well. One of the most relevant areas in the development of educational technologies is gamification. In the scientific literature researchers define gamification as the use of game methods in a non-game context (Varenina, 2014; Goryachkina, Kudryavtseva & Balina, 2016). However, the use of game elements is not an innovation for educational organizations. In Russian pedagogy the issue of gaming activities was developed by Ushinsky (1968), Elkonin (2005) and Rubinshtein (1989), who investigated the peculiarities of games in teaching and upbringing. Gaming technology is a highly effective and universal method suitable for all educational purposes (Igna, 2011). In higher education, game techniques and methods were widely used in teaching even before the emergence of the phenomenon of gamification.
This might lead us to think that the phenomenon of games is no longer relevant for pedagogical research. However, the growth of Generation Z leads us to take a new look at the function of games in educational activity.
Generation Z is the term used worldwide for the generation born approximately between 2000 and 2019. It corresponds to the theory of generations developed by Neil Howe and William Strauss (2000).
Representatives of this generation actively use tablets, gadgets and smart watches. This is the first digital generation, and its representatives are called «Digital Natives» (Prensky, 2010). No doubt the appearance of the digital generation has an impact on the aims, content and tools of the pedagogical process.
Along with changes in the pedagogical process, methods of teaching are changing too. Thus, the use of game elements as one of the methods in foreign language teaching has also changed: today it is considered gamification. In spite of the fact that gamification can also draw on traditional game mechanics, it is nevertheless considered part of digital education.
The concept of gamification is widely examined in the writings of Russian and foreign researchers (Lobacheva, 2018; Nechunaev, 2018; Stepashkina, 2017; Herger, 2014; Werbach, 2020). Herger (2014), for instance, makes a clear distinction between gamification and play, game, simulation and serious games. In spite of the similarity of these concepts, they are different. According to Herger (2014), there are neither rules nor goals in play; the main thing is the activity itself. Games, on the contrary, have both rules and goals. For example, they help to solve a definite problem, but games do not have a direct real-world outcome either.
Serious games are games created for a primary goal other than fun and entertainment. They possess all game elements, but their aim is to achieve something predetermined. Serious games have a serious background and purposes, for example training pilots. Simulations look like serious games; however, they simulate real life. Their objective is user training in an environment similar to real life. According to Merriam-Webster's Dictionary, a simulation is something that is made to behave like something else so that it can be used to train people (Staff, 2004). Gamification is the use of game elements and game mechanics in non-game contexts; its goal is to engage users and solve particular problems (Herger, 2014). Comparing all these definitions, Herger (2014) summarized them in a matrix (Table 1). The comparative parameters are the following: spontaneity, the existence of rules and goals, inner structure, real world versus gaming space, and system.
Table 1. Comparison of play, game, serious games, simulation and gamification by spontaneity, rules, goals, structure, real world versus gaming space, and system (after Herger, 2014).
The table shows that gamification overlaps with "game" in such positions as the existence of rules, goals and structure. However, a game shifts the player to the gaming space, whereas gamification leaves us in the real world: the participant remains himself, takes on no roles and acts only from his own motivation and inner goal, for instance to learn the English language. A gamified educational process should be structured, because the teaching programme needs to be divided into stages, and every stage should have its own goal serving the general purpose of the course. Simulation is more similar to gamification: it creates an illusion of reality in a computer environment and serves training purposes. In comparison, gamification creates an illusion of a game and uses the mechanics of the computer environment in the real world. With regard to the system parameter, a gamified educational process does not mean the use of separate game elements, but the gamified maintenance of the whole educational activity. Gamification fully accompanies the educational course, from the setting of goals and purposes to the final control of knowledge.
When implementing gamification in foreign language teaching, it is necessary to know what player types the students are. The game researcher Richard Bartle (2004) divided player personalities into four types: killers, achievers, socializers and explorers. Killers are players whose main goal is to win over the other players; they want to fight the others and are highly competitive. The main purpose for achievers is to earn points, levels and virtual goods. The prestige of having these items motivates achievers, even if the rewards do not help them achieve game success. Explorers like discovering the virtual world and try to find every hidden corner or feature in it. For socializers, the game itself is not so important; they play to communicate with other people and users. Every student (and every person in general) combines these four player characteristics: some predominate, but they can change with the situation. Awareness and understanding of students' player types help to implement gamification in foreign language teaching more effectively, which in turn leads to better knowledge acquisition.
According to Werbach (2020), gamification comprises game elements, game design techniques and a non-game context. There are three parts to this definition.
Game elements can be divided into game mechanics and game dynamics. The game mechanics are the foundation of gamification in education and are used to gamify any process or content. They include scores, levels, badges and trophies, leaderboards, tasks, avatars and individual profiles.
Scores and points are among the prime motivators; collecting them makes the student feel rewarded. Collected scores and points unlock new levels for the students. There is no punishment for failure, only rewards for effort.
Levels are a common game mechanic: tasks start simple and slowly increase in difficulty. The increase in difficulty, as well as the progression to new levels, fosters the students' development and thus improves their competence. Examples of language learning platforms that use levels are DuoLingo and Lingualeo, where the student unlocks new levels as he progresses.
Badges and trophies are symbols of a student's status and can be awarded for his achievements.
As for the game dynamics, they drive student motivation and are needed to keep the game from becoming monotonous. They include achievements, competition, progress, collaboration and surprise. Competition is one of the prime motivators that makes students move forward and do their best.
Game elements are investigated and described in the writings of such researchers as Deterding et al.
As for game design techniques, they are considered to constitute the visual experience of the game.
A non-game context is any setting where the objective lies beyond the game, where the purpose is other than playing the game itself.
Gamification brings game techniques into the teaching system without turning it into a game. Its elements can easily be implemented in the educational process, which helps to significantly increase students' motivation to learn.
Purpose and objectives of the study
The goal of this study is to confirm that foreign language teaching with the use of gamification technology is more effective than traditional ways of teaching and teaching with the use of classical gaming technologies.
Methodology
The research is based on theoretical and empirical methods: analysis of studies on gamification in education; Bartle's test (2004); observation; and pilot modeling of pedagogical activity with the help of gamification.
Experimental research base
Experimental research was carried out at the Kazan (Volga Region) Federal University on the basis of the Institute of Management, Economics and Finance. The total number of participants in the pedagogical experiment was 86 students (25 in the experimental group, 30 in control group 1, and 31 in control group 2).
The study included four stages. At the first stage, the theoretical and methodological approaches were analyzed, the objective and methods were defined, and the plan of the present research was drawn up.
At the second stage, the educational process was organized in three variants: in the first control group (CG 1), traditional methods and teaching tools were used; in the second control group (CG 2), the lessons were carried out with traditional game tools; in the experimental group (EG), gamification was implemented in the educational process.
The main teaching methods in the first control group were the explanation of new material, textbook work, discussions and exercises. In the second control group, traditional teaching methods were combined with game techniques such as the widespread games Hangman, Hot Seat and Mime.
Before implementing gamification in the educational process of the experimental group, we ran R. Bartle's test (2004) to determine each student's player type. Game applications were then selected and implemented in the educational process in accordance with the players' types.
At the third stage, we examined the learning skills of the students; for this purpose a written test was developed.
At the fourth stage of our research, the statistical data and practical results obtained were analyzed and summarized.
Results
At the diagnostic stage of the research, R. Bartle's test (2004) was administered to the students of the experimental group. The test revealed the following percentages of player types (Figure 1).
Figure 1. Types of players among the students in the experimental group: socializers 30%, explorers 13%, achievers 47%, killers 10%.
Figure 1 shows that achievers prevail in the experimental group. Achievers are all about points and status; they like collecting badges and putting them on display, and they want to show their friends how they are progressing. We selected the game application taking into account the factors most relevant and attractive to achievers: the popular educational online platform Lingualeo, which offers an English language learning service based on gamification methods. Lingualeo has the levels, points and virtual goods that are so important to achievers. This game application was successfully implemented in the educational process.
At the next stage of the pedagogical experiment, we examined the English language skills the students had acquired.
The results are shown below (Figure 2).
Figure 2. Indicators of students' knowledge level.
Analysis of the data in Figure 2 reveals that both the second control group and the experimental group showed a higher level of learning gains than the first control group.
This leads us to conclude that the use of game mechanisms in foreign language teaching can improve students' language competency. The experimental group, where gamification was implemented in the educational process, showed the best results.
Furthermore, we decided to analyze residual knowledge, that is, the retention of skills and language competency over time. For this purpose, we re-examined the students' language skills one month later. From the results we calculated the grade point average of each group and compared it with the grade point average of the written test conducted immediately after the study (Figure 3).
Figure 3. Results of the test of residual language skills.
Figure 3 shows that the retesting results declined slightly; however, the highest score again belongs to the experimental group. This indicates that gamification contributes to more effective memorization of the studied material. To identify more significant differences in residual knowledge, the students should be examined one year after the experiment.
While implementing gamification in the educational process of the experimental group and using it in English lessons over a longer period, we observed the following tendency: the use of gamification increases students' need for interactive digital methods in foreign language teaching, which, by analogy with computer games, we can call "gamification addiction". Consequently, in the absence of gamification, students lose interest in language learning and are demotivated to study.
Thus, despite the complexity of implementing gamification in the educational process (identifying player types, selecting or developing a game program, integrating the game into the lesson, and controlling all stages of the gamified lesson), the efficiency of a gamified educational process is higher than that of traditional teaching methods.
Discussion
Despite the obvious potential of gamification in the educational process, a review of the existing theoretical literature shows that there is currently a lack of appropriate teaching strategies and methodologies for implementing gamification in the learning process. Hence, there is a need for further study of the problem of implementing gamification in education; in particular, the didactic, methodological and psychological aspects of its implementation in foreign language teaching should be developed. The issue of selecting or developing a game program for a definite pedagogical objective is also very important: while there are many digital teaching game applications and programs, there is no unified classification of them.
Conclusion
The process of digitalization influences, and will continue to influence, the system of professional education in higher education. Under the impact of new technologies, the educational process is being increasingly transformed.
As a new method of foreign language teaching, gamification has great pedagogical potential. Its role in educational discourse, in particular with regard to developing students' language competency, is beyond doubt. Based on the results of this research, we can say that implementing gamification in foreign language teaching leads to the achievement of certain learning objectives and, as a result, to the improvement of students' language competency. However, it requires a complex and comprehensive approach: it is necessary to select or develop a specific game program, to identify player types, and to control all stages of implementing gamification in the traditional educational process.
"Education",
"Computer Science"
] |
Emotion detection from handwriting and drawing samples using an attention-based transformer model
Emotion detection (ED) involves the identification and understanding of an individual’s emotional state through various cues such as facial expressions, voice tones, physiological changes, and behavioral patterns. In this context, behavioral analysis is employed to observe actions and behaviors for emotional interpretation. This work specifically employs behavioral metrics like drawing and handwriting to determine a person’s emotional state, recognizing these actions as physical functions integrating motor and cognitive processes. The study proposes an attention-based transformer model as an innovative approach to identify emotions from handwriting and drawing samples, thereby advancing the capabilities of ED into the domains of fine motor skills and artistic expression. The initial data obtained provides a set of points that correspond to the handwriting or drawing strokes. Each stroke point is subsequently delivered to the attention-based transformer model, which embeds it into a high-dimensional vector space. The model builds a prediction about the emotional state of the person who generated the sample by integrating the most important components and patterns in the input sequence using self-attentional processes. The proposed approach possesses a distinct advantage in its enhanced capacity to capture long-range correlations compared to conventional recurrent neural networks (RNN). This characteristic makes it particularly well-suited for the precise identification of emotions from samples of handwriting and drawings, signifying a notable advancement in the field of emotion detection. The proposed method produced cutting-edge outcomes of 92.64% on the benchmark dataset known as EMOTHAW (Emotion Recognition via Handwriting and Drawing).
INTRODUCTION
Emotion detection (ED) is the process of recognizing and evaluating the emotional states and feelings of individuals using a variety of techniques. Accurately understanding and interpreting human emotions is the ultimate objective of ED, which has a variety of applications in areas including mental health, user experience, education, marketing, and security (Acheampong, Wenyu & Nunoo-Mensah, 2020). Emotions are one's reactions, and they can differ widely among individuals. Defining universal patterns or guidelines for detection might be challenging since individuals may show the same emotion in various ways. Although emotion science has made considerable strides, there is still much to learn about the subtleties and complexity of human emotions. It is still difficult to create comprehensive representations that adequately depict the entire spectrum of emotions (Zad et al., 2021). The development of intelligent systems to assist physicians at the point of treatment uses machine learning (ML) techniques. Such systems can support conventional clinical examinations for the assessment of Parkinson's disease (PD) by detecting its early symptoms and signs. In patients with PD, previously taught motor abilities, including handwriting, are frequently impaired. This makes handwriting a potent identifier for the creation of automated diagnostic systems (Impedovo, Pirlo & Vessio, 2018). Deep learning (DL) models have shown promising outcomes in ED, notably those built on RNNs and convolutional neural networks (CNNs). These models can recognize temporal connections and learn complicated patterns from emotional data (Pranav et al., 2020).
The study conducted by Kedar et al. (2015) analyzed handwriting features such as baseline, slant, pen pressure, dimensions, margin, and boundary to estimate an individual's emotional levels. The study concludes that this will aid in identifying individuals who are emotionally disturbed or sad and require psychiatric assistance to deal with such unpleasant emotions. Additionally, Gupta et al. (2019) examine electroencephalogram (EEG) signals from the user's brain to determine their emotional state. The study uses a correlation-finding approach for text improvement to change the words that match the observed emotion. The accuracy of the sentence was then verified using a language modeling framework built on long short-term memory (LSTM) networks. In a dataset with 25 subjects, an accuracy of 74.95% was found when classifying five emotional states using EEG signals. Based on handwriting kinetics and quantified EEG analysis, a computerized, non-invasive, and speedy detection technique for mild cognitive impairment (MCI) was proposed in Chai et al. (2023). They employed a classification model built on dual-feature fusion created for medical decision-making, used an SVM with an RBF kernel as the base classifier, and achieved a high classification rate of 96.3% for the aggregated features.
Existing ED research has mostly concentrated on a small number of fundamental emotions, such as happiness, sadness, and anger. The complexity and variety of emotional expressions make it difficult to adequately depict the entire emotional range. The performance of ED systems is improved by identifying significant characteristics across several modalities and creating suitable representations (Zakraoui et al., 2023). The capacity to recognize emotions via routine activities like writing and drawing could contribute to mental health (Rahman & Halim, 2023). Collecting handwriting and drawing samples has become easier with the rise of human-machine interfaces like tablets (Likforman-Sulem et al., 2017). Therefore, extracting and comprehending emotion is a key component of human-computer communication (Yang & Qin, 2021).
In contrast to previous research, which mostly relied on text or audio data, this study involves a novel approach that analyzes samples of handwriting and drawings to identify emotions. The suggested method employs a digital platform to analyze an individual's handwriting or drawing samples and forecast their emotional state using an attention-based transformer model. This method can capture more complex and subtle feelings, which may not be adequately conveyed through spoken words. This work employs an attention-based transformer model, an architecture that has proven to be very successful in natural language processing (NLP) tasks. The proposed model can capture long-range relationships and identify significant aspects in the input data, which may be essential for precisely recognizing emotions. The performance of the suggested model is tested on benchmark datasets, where it outperforms closely related work. The proposed study advances current knowledge by establishing a baseline for evaluating the performance of ED models using handwriting and drawing examples.
Challenges and limitations
It is interesting to improve communication between humans and robots by extracting emotions from handwriting. However, the amount and quality of the training dataset affect the process of creating predictive ML frameworks, and there is not sufficient public data available to create precise models for emotion recognition from handwriting and drawing samples. Additionally, the absence of voice modality and facial expressions further complicates emotion detection in text (Chatterjee et al., 2019). Defining clear and unbiased criteria for different emotional states is challenging because individual handwriting styles can vary widely. Emotions are inherently complex, contributing to overlap between distinct emotional states; for instance, it might be difficult to distinguish between sadness and frustration when they are conveyed in writing or drawing in similar ways. Emotions are frequently context-dependent and influenced by a variety of elements, such as personality, social environment, and cultural background. This makes it challenging to create models that correctly identify emotions in various contexts and circumstances. Despite these difficulties, researchers have been investigating numerous strategies to analyze handwriting and determine emotional states. The accuracy of emotion recognition from handwriting can be increased by integrating many modalities, such as examining the content, structure, and dynamics of handwriting together with other indications.
Novelty and contribution
This research makes several distinctive contributions to emotion recognition from handwriting and drawing samples using an attention-based transformer model. Firstly, it integrates two modalities, handwriting and drawing, a departure from previous studies that often focused on one modality. This integration allows a more comprehensive analysis of emotions, considering the diverse ways individuals express their feelings through these different mediums. The core contribution lies in the introduction of an attention-based transformer model, finely tuned to capture the spatial and temporal correlations inherent in handwriting and drawing samples. Notably, the model excels in capturing long-term dependencies, a feature advantageous for understanding nuanced emotional expressions. Unlike traditional models, the transformer is not order-dependent, enhancing its flexibility in handling non-sequential input sequences such as handwriting and drawing samples. These contributions collectively advance the field of emotion recognition by offering a novel and effective model that considers diverse modalities and addresses inherent challenges in capturing emotional nuances.
A SURVEY OF EXISTING WORK
In recent years, a substantial body of research has been dedicated to exploring the intersection of machine learning and computer vision in the domain of emotion detection (Kanwal, Asghar & Ali, 2022; Azam, Ahmad & Haq, 2021; Asghar et al., 2022; Li & Li, 2023). This has become particularly crucial given the growing significance of understanding and interpreting human emotions. Beyond emotion detection, machine learning has found applications in various domains, including hot topic detection (Khan et al., 2017, 2023), anomaly detection (Haq & Lee, 2023), and recognizing anomalous patterns (Haq et al., 2023), especially in social media contexts. Furthermore, machine learning has played a pivotal role in text classification (Ullah et al., 2023; Khan et al., 2022), providing sophisticated approaches for organizing and categorizing textual data and rating software applications. The ability to classify information is vital not only for sentiment analysis but also for broader applications such as content categorization (Zhang et al., 2023). In addition, information fusion (Wang et al., 2021; Zhang et al., 2020), where data from diverse sources are integrated to provide a more comprehensive understanding, has gained prominence. In the pursuit of accurate diagnoses for depression, Esposito et al. (2020) have explored innovative methodologies to obtain more reliable measurements than conventional questionnaires. This involves analyzing behavioral data, including voice, language, and visual cues, with sentiment analysis techniques evolving from linguistic characteristics into sophisticated tools for text, audio, and video recordings. The research study of Aarts, Jiang & Chen (2020) introduces an application detecting four distinct emotions from social media posts, outlining techniques, outcomes, challenges, and proposed solutions for the project's continuity. Another research study (Aarts, Jiang & Chen, 2020) utilizes convolutional neural networks to assess visual attributes in characterizing graphomotor samples of Parkinson's disease (PD) patients, achieving 83% accuracy in early PD prediction on a dataset of 72 subjects using visual information and SVM classification. According to Aouraghe et al. (2020), emotions like stress, worry, and depression have an impact on health, so it is crucial to recognize their emotional symptoms as early as possible. The study conducted in Moetesum et al. (2019) focused on the possibility of using handwriting's visual characteristics to anticipate PD. Neurodegenerative illnesses affect the structure and operation of some brain areas, leading to a gradual deterioration in cognitive, functional, and behavioral abilities.
A research study (Ayzeren, Erbilek & Çelebi, 2019) introduces a distinctive handwriting and signature biometric database with emotional status labels. The investigation delves into predicting emotional states (happy, sad, stressed) from online biometric samples, achieving noteworthy success, especially in stress prediction from handwriting. The database, encompassing 134 participants, includes demographic information for comprehensive analysis. Examining preserved graphic structures in handwriting, the study by Rahman & Halim (2022) explores the correlation between handwriting features and personality traits. Using a graph-based approach, eleven features are extracted to predict personality traits based on the Big Five model. Employing a semi-supervised generative adversarial network (SGAN) for enhanced accuracy, the study achieved a remarkable predictive accuracy of 91.3%. Nolazco-Flores et al. (2021) characterize emotional states related to depression, anxiety, and stress using features from signals captured on a tablet. The EMOTHAW database includes handwriting and drawing tasks categorized into specific emotional states. Selected features improve average classification accuracy (by up to 15%) compared to the baseline. The study conducted in Shrivastava, Kumar & Jain (2019) presented a DL framework for the challenge of fine-grained emotion identification utilizing multimodal text data. They unveiled a new corpus depicting various emotions gleaned from a TV show's transcript and employed a sequence-based CNN with word embeddings as an ED framework to identify emotions. The study conducted in Bhattacharya, Islam & Shahnawaz (2022) provided a unique approach based on the Agglomerative Hierarchical Clustering algorithm, which can recognize the emotional state of an individual by examining an image of handwritten text; they identify emotions with a maximum accuracy of 75%. There is a lack of well-established techniques for emotion recognition using handwriting and drawing samples, and the precise characteristics or patterns in handwriting that relate to particular emotions have not yet been well discovered. Various experiments have been conducted to detect emotion from handwritten text; however, certain limitations remain, as explained in the following sub-section.
PROPOSED METHODOLOGY
The proposed study introduces an attention-based transformer model designed to generate a more comprehensive feature map from handwriting and drawing samples. This model aims to accurately identify both handwritten information and emotional content. The model assumes that writing and drawing are influenced by one's emotional state and are connected to behavior. The proposed method involves gathering an individual's writing through an electronic device and analyzing it to determine the individual's emotional state. The suggested model is founded on the transformer architecture, which makes use of attention processes to provide a more detailed feature map of the data. The goal is to enhance the model's capacity to recognize and understand the information and feelings represented in handwriting and drawing. The study offers a thorough assessment of numerous writing and drawing traits and identifies their relationship to emotional states. Using writing and drawing samples, this work identified emotions with a high degree of accuracy. The details of how the algorithm works are explained in Algorithm 1.
Utilizing the attention-based transformer model for emotion detection involves a systematic process. First, the required libraries and frameworks are imported, and the model architecture is defined, considering parameters such as the number of layers (L), the dimension of the model (d_model), the number of attention heads (h), and the dimension of the feed-forward network (dff). Pre-trained weights of the transformer model are loaded, and the provided handwriting and drawing samples undergo preprocessing, converting them into a suitable input format and storing them for subsequent use. Following this, the preprocessed samples are loaded, converted into tensors, and passed through the transformer model for inference. The model's output is processed through a linear layer and a softmax function to generate predicted probabilities for each emotion. The final step extracts the emotion with the highest probability for each sample, ultimately returning the predicted emotion. This streamlined process ensures effective and nuanced emotion detection from the input samples.
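As a concrete illustration of this pipeline, the following is a minimal PyTorch sketch. It is not the authors' released code: the class name EmotionTransformer, the mean-pooling choice, and the default hyperparameter values are illustrative assumptions.

```python
# Minimal sketch of the inference pipeline described above (PyTorch).
# Names and defaults are illustrative assumptions, not the paper's code.
import torch
import torch.nn as nn

NUM_EMOTIONS = 6   # e.g., Ekman's six basic categories
FEATURE_DIM = 7    # x, y, timestamp, pen status, azimuth, altitude, pressure

class EmotionTransformer(nn.Module):
    def __init__(self, d_model=128, nhead=8, num_layers=4, dff=512):
        super().__init__()
        self.embed = nn.Linear(FEATURE_DIM, d_model)      # input encoding
        layer = nn.TransformerEncoderLayer(
            d_model, nhead, dim_feedforward=dff, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers)
        self.head = nn.Linear(d_model, NUM_EMOTIONS)      # final linear layer

    def forward(self, x):                  # x: (batch, seq_len, FEATURE_DIM)
        h = self.encoder(self.embed(x))    # (batch, seq_len, d_model)
        h = h.mean(dim=1)                  # pool over stroke points
        return self.head(h)                # logits per emotion

model = EmotionTransformer()
sample = torch.randn(1, 1796, FEATURE_DIM)    # one preprocessed sample
probs = torch.softmax(model(sample), dim=-1)  # predicted probabilities
pred = probs.argmax(dim=-1)                   # emotion with highest probability
```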
Emotion models
The core of ED systems is the emotional model used to represent individual feelings. It is crucial to establish the model of emotion to be used before beginning any ED-related activity.
Discrete emotion models
The discrete model of emotions categorizes emotions into several groups or subgroups. The Paul Ekman model (Ekman, 1999) classifies emotions into six fundamental categories. According to this idea, there are six essential emotions which are independent of one another and arise from different neurological systems depending on how an individual experiences a scenario. These basic feelings are happiness, sadness, anger, disgust, surprise, and fear. However, the synthesis of these feelings may result in more complicated feelings such as pride, desire, regret, and shame.
Dimensional emotion models
The dimensional model places emotions in a spatial location, since it assumes that emotions are not independent of one another and that there is a relationship between them. The circumplex of affect, a circular two-dimensional model presented by Russell (1980), is significant in dimensional emotion representation. The model separates emotions along the Arousal and Valence dimensions, with Arousal classifying emotions according to activation and deactivation and Valence classifying emotions according to pleasantness and unpleasantness. This work uses both emotional models.
Experimental design
The implementation of the proposed work is done using Jupyter Notebook with a five-fold cross-validation training strategy. Both the EMOTHAW and SemEval datasets were used during training. The learning rate is set to 0.0001, and the model is trained over 25 epochs. A weight decay of 0.05 is applied to control overfitting, and the Adam optimizer is employed for efficient parameter updates. The loss function utilized for training is cross-entropy. Additionally, attention heads are incorporated using a multi-head mechanism, enhancing the model's ability to capture intricate patterns and dependencies within the data. The model's number of layers, attention heads, dimensionality of embeddings, learning rate, and batch size are among the hyperparameters that are fine-tuned. Several assessment criteria, including accuracy, F1 score, precision, and recall, are used to assess the model's performance on the test set. These measures are calculated from the number of samples in each emotion category that were correctly and incorrectly identified.
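Under the stated hyperparameters (learning rate 0.0001, 25 epochs, weight decay 0.05, Adam, cross-entropy), the training loop can be sketched as follows; the synthetic tensors stand in for the preprocessed features, and the five-fold split is omitted for brevity.

```python
# Minimal training-loop sketch matching the stated setup; the synthetic
# tensors below stand in for real preprocessed EMOTHAW/SemEval features.
import torch
import torch.nn as nn
from torch.utils.data import DataLoader, TensorDataset

X = torch.randn(64, 256, 7)     # (samples, stroke points, channels) - synthetic
y = torch.randint(0, 6, (64,))  # integer emotion labels - synthetic
train_loader = DataLoader(TensorDataset(X, y), batch_size=16, shuffle=True)

model = EmotionTransformer()    # from the earlier sketch
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4, weight_decay=0.05)
criterion = nn.CrossEntropyLoss()

model.train()
for epoch in range(25):                            # 25 epochs
    for features, labels in train_loader:
        optimizer.zero_grad()
        loss = criterion(model(features), labels)  # cross-entropy loss
        loss.backward()                            # backpropagation
        optimizer.step()                           # Adam parameter update
```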
EMOTHAW dataset
The EMOTHAW database (Likforman-Sulem et al., 2017) contains samples from 129 individuals (aged between 21 and 32) whose emotional states, including anxiety, depression, and stress, were measured using the Depression Anxiety Stress Scales (DASS) assessment. Given the dearth of publicly accessible labeled data in this field, the database is itself a helpful resource. A total of 58 men and 71 women participated in the dataset. The age range was constrained to decrease inter-subject variation in the experiment. Seven activities are recorded using a digitizing tablet: drawing pentagons and houses, handwriting words, drawing circles and clocks, and copying a phrase in cursive writing. The writing and drawing activities used to obtain the measures are well-researched exercises that are also used to assess a person's handwriting and drawing abilities. Records include pen azimuth, altitude, pressure, time stamp, and positions (both on paper and in the air). The generated files, produced through a Wacom device, have the .svc file extension. Figure 1 shows a system overview of the proposed method.
SemEval dataset
The Semantic Evaluations (SemEval) dataset (Rosenthal, Farra & Nakov, 2019) includes news headlines in Arabic and English taken from reputable news sources including the BBC, CNN, Google News, and other leading newspapers. There are 1,250 total data points in the dataset. The database contains a wealth of emotional information that may be used to extract emotions, and the data is labeled according to the six emotional categories proposed by Ekman (1999): happiness, sadness, fear, surprise, anger, and disgust.
Feature extraction
Drawing and handwriting signals are examples of time series data, represented as a collection of data points gathered over time. In this study, we extract characteristics from the handwriting and drawing signals in the time domain, frequency domain, and statistical domain. Time-domain characteristics, such as the signal's mean and standard deviation, are extracted from the signal's changing amplitude over time. Frequency-domain characteristics, including spectral entropy, spectral density, and spectral centroid, are extracted from the frequency content of the signal. Statistical features, such as mutual information, cross-correlation, and auto-correlation, are extracted from the signal's statistical characteristics.
Mean
The average value of the signal over a particular time interval is calculated to get an idea of the overall level of the signal. In this work, the mean value of a handwriting signal $x[n]$ over a time interval of $N$ samples is calculated as
$$\mu = \frac{1}{N}\sum_{n=0}^{N-1} x[n],$$
where $\mu$ is the mean value of the signal, $n$ ranges from $0$ to $N-1$, and $\sum$ denotes the sum of all values.
Standard deviation
The standard deviation of the signal over a particular time interval is calculated to measure the amount of variability in the signal. In this work, the standard deviation of a handwriting signal $x[n]$ over a time interval of $N$ samples is calculated as
$$\sigma = \sqrt{\frac{1}{N}\sum_{n=0}^{N-1}\bigl(x[n]-\mu\bigr)^{2}},$$
where $\sigma$ represents the standard deviation of the signal and $n$ ranges from $0$ to $N-1$.
Spectral density
Spectral density is used to measure the power distribution of a handwriting signal in the frequency domain. In this work, the spectral density of a handwriting signal is calculated as
$$S(f) = \lvert F(f)\rvert^{2},$$
where $S(f)$ represents the spectral density and $F(f)$ denotes the Fourier transform of the signal.
Spectral entropy
In the proposed work, spectral entropy is used to measure the irregularity of the power spectrum of the handwriting signal. The spectral entropy is calculated as
$$SE = -\sum_{i=1}^{N} P_i \log P_i,$$
where $SE$ represents the spectral entropy, $P_i$ is the (normalized) power at the $i$-th bin of the power spectrum, and $N$ denotes the number of frequency bins.
Autocorrelation
Autocorrelation measures the degree of similarity between the handwriting signal and a delayed version of itself. It can be used to identify patterns or repeating features in the signal. In this work, the autocorrelation of a handwriting signal $x[n]$ over a time interval of $N$ samples is calculated as
$$R[k] = \sum_{n=0}^{N-1-k} x[n]\,x[n+k],$$
where $R[k]$ is the autocorrelation of the signal at lag $k$ and $n$ ranges from $0$ to $N-1$.
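To make the feature definitions concrete, here is a minimal NumPy sketch of the extraction step. The normalization conventions (biased autocorrelation, Shannon entropy of the normalized periodogram) are assumptions, since the paper does not fix them.

```python
# Minimal sketch of the time-, frequency- and statistical-domain features
# described above. Normalization conventions are assumptions.
import numpy as np

def extract_features(x):
    """x: 1-D array of one stroke channel (e.g., pen pressure over time)."""
    N = len(x)
    mu = x.mean()                          # mean
    sigma = x.std()                        # standard deviation

    F = np.fft.rfft(x)                     # Fourier transform
    S = np.abs(F) ** 2                     # spectral density (power spectrum)
    P = S / S.sum()                        # normalized bin powers
    spectral_entropy = -np.sum(P * np.log(P + 1e-12))

    # biased autocorrelation at every lag k
    autocorr = np.array([np.sum(x[: N - k] * x[k:]) for k in range(N)])

    return mu, sigma, S, spectral_entropy, autocorr
```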
Classification
An attention-based transformer model is used to classify the features of the handwriting and drawing signals. During training, the pre-processed data is fed into the attention-based transformer model to decrease the discrepancy between the model's forecasts and the actual labels assigned to the data; the model is trained using strategies such as gradient descent and backpropagation. One of the main characteristics of an attention-based transformer model is its capacity to attend to different aspects of the incoming data. This is accomplished through an attention mechanism, which enables the model to concentrate on the input's key characteristics at each stage of the classification process. By attending to different components of the input, the model learns to recognize patterns and traits that are crucial for differentiating between classes of handwriting and drawing. Additionally, this architecture typically has several processing layers, each designed to learn more intricate representations of the input data, enabling the model to identify subtler characteristics and patterns. A softmax function is typically applied to the output of the last layer to generate a probability distribution across all feasible classes of handwriting and drawing.
EXPERIMENTS
The first stage in the experiments is to process the data from each dataset to identify the appropriate features for emotion identification from handwriting and drawing samples.
An attention-based transformer model for emotion recognition from handwriting and drawing samples is trained and evaluated through a series of experiments using the EMOTHAW and SemEval benchmark datasets. These datasets include several types of handwritten and drawn samples as well as the associated emotion labels. The EMOTHAW dataset provides SVC files, each containing 1,796 points with seven measurements (x location, y location, time stamp, pen status, azimuth, altitude, and pressure). Both points acquired on paper (pen status equal to 1) and points acquired in the air (pen status equal to 0) are included. The SemEval dataset includes 4,359 handwritten text samples of various emotions, such as happiness, surprise, joy, anger, fear, and sadness.
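For readers reproducing the preprocessing, an .svc recording of this kind can be read as a whitespace-separated table. The sketch below assumes the seven columns appear in the order listed above; the exact layout (including a possible point-count header line) should be verified against the EMOTHAW documentation.

```python
# Minimal sketch for loading one EMOTHAW .svc recording.
# Column order is an assumption; some .svc files store the point count on
# the first line, in which case np.loadtxt(path, skiprows=1) is needed.
import numpy as np

COLUMNS = ["x", "y", "timestamp", "pen_status", "azimuth", "altitude", "pressure"]

def load_svc(path):
    points = np.loadtxt(path)                 # one stroke point per row
    on_paper = points[points[:, 3] == 1]      # pen status 1: on paper
    in_air = points[points[:, 3] == 0]        # pen status 0: in air
    return points, on_paper, in_air
```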
Attention-based transformer model
The capacity of the attention-based transformer model to identify long-range relationships in sequential data has made it a popular neural network design in recent years. The model is made up of several self-attention layers that enable it to intelligently focus on various components of the input data depending on their relevance to the task at hand. The attention mechanisms of the transformer architecture direct the model's attention to the areas of the input data that are most important. They improve the final classification performance by collecting additional factors that are helpful for classification and giving higher priority to essential pieces of information. In the classification of handwriting signals, the attention-based transformer model is employed to develop a representation of the input signal that captures the characteristics required for inferring the associated emotion. The model receives as input a series of time-domain, frequency-domain, or statistical features derived from the handwritten signal and produces a probability distribution across the emotion categories.
Input encoding: In the proposed work, the input sequence of extracted features undergoes an initial encoding through a linear transformation and a subsequent nonlinear activation function to generate a sequence of embeddings:
$$X = \{x_1, x_2, \ldots, x_n\} \rightarrow E = \{e_1, e_2, \ldots, e_n\}.$$
Self-attention: The proposed work utilizes a self-attention layer to compute a weighted sum of the embeddings, with the weights determined based on the similarity between each pair of embeddings:
$$\text{Attention}(Q, K, V) = \text{softmax}\!\left(\frac{QK^{T}}{\sqrt{d_k}}\right)V,$$
where $Q$, $K$, and $V$ represent the query, key, and value matrices, respectively, and $d_k$ denotes the dimensionality of the key vectors. The softmax function normalizes the similarity scores to obtain a probability distribution over the keys. In a broader sense, the attention mechanism can be described conceptually as $\text{Attention}(q, k, v)$: the similarity ($\text{Sim}$) between queries ($q$) and keys ($k$) is assessed, and this similarity score weights the sum of the values ($v$). Put simply, the mechanism determines how much attention each query should assign to its corresponding key and combines these attention-weighted values to produce the final output. Typically, softmax is employed to derive the similarity score. The softmax attention mechanism entails matrix multiplication: the dot product is computed between each feature vector in $q$ and the transpose of $k$, divided by the scaling factor $\sqrt{d_k}$, and then passed through a softmax function. Because it relies on the dot product, this mechanism considers both the angle and the magnitude of the vectors when computing similarity.
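The scaled dot-product formula above translates directly into a few lines of NumPy; the following sketch is illustrative rather than the authors' implementation.

```python
# Minimal sketch of scaled dot-product self-attention as given above.
import numpy as np

def softmax(z, axis=-1):
    z = z - z.max(axis=axis, keepdims=True)   # numerical stability
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def attention(Q, K, V):
    """Q, K, V: (seq_len, d_k) matrices; returns (seq_len, d_v)."""
    d_k = K.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)           # scaled similarity scores
    weights = softmax(scores, axis=-1)        # probability distribution over keys
    return weights @ V                        # attention-weighted sum of values
```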
Multi-head attention: To capture multiple aspects of the input writing signal in this work, the self-attention layer is extended to include multiple heads:
$$\text{head}_i = \text{Attention}(Q_i, K_i, V_i),$$
where $i$ represents the head index, and $Q_i$, $K_i$, and $V_i$ denote the query, key, and value matrices for the $i$-th head; the outputs of the $h$ heads are concatenated.
Feed-forward networks: In this work, the output of the self-attention layer is passed through a feed-forward network to further refine the representation:
$$\text{FFN}(x) = f(xW_1 + b_1)\,W_2 + b_2,$$
where $f$ represents a non-linear activation function. Here the feed-forward network consists of two linear transformations with a non-linear activation function between them.
Output prediction: In this work, the final output of the transformer model is obtained by passing the refined representation $z$ through a linear transformation and a softmax function:
$$P = \text{softmax}(zW + b),$$
where $P$ represents the predicted probability distribution over the emotion categories.
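Building on the single-head attention() and softmax() functions sketched earlier, multi-head attention, the feed-forward block, and the output prediction can be sketched as follows; the randomly initialized projection matrices stand in for learned parameters.

```python
# Sketch of multi-head attention, the feed-forward block and the output
# prediction, reusing attention() and softmax() from the earlier snippet.
# Randomly initialized projections stand in for learned parameters.
import numpy as np

rng = np.random.default_rng(0)

def multi_head(E, h=8, d_k=16):
    """E: (seq_len, d_model) embeddings; returns concatenated head outputs."""
    heads = []
    for _ in range(h):
        Wq, Wk, Wv = (rng.standard_normal((E.shape[1], d_k)) for _ in range(3))
        heads.append(attention(E @ Wq, E @ Wk, E @ Wv))  # head_i
    return np.concatenate(heads, axis=-1)                # Concat(head_1..head_h)

def ffn(x, W1, b1, W2, b2):
    """Two linear transformations with a non-linearity (ReLU assumed)."""
    return np.maximum(0.0, x @ W1 + b1) @ W2 + b2

def predict(z, W, b):
    """P = softmax(zW + b): probability distribution over emotion categories."""
    return softmax(z @ W + b)
```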
Evaluation metrics
The assessment criteria employed in emotion identification from handwriting and drawing samples using an attention-based transformer model depend largely on the particular task and dataset. The assessment metrics utilized in this study are explained below.
Accuracy
This measures the proportion of correctly classified emotions to the total number of emotions in the dataset:
$$\text{Accuracy} = \frac{T_P + T_N}{T_P + T_N + F_P + F_N},$$
where $T_P$ represents the number of true positive samples (samples correctly classified as the target emotion), $T_N$ the number of true negative samples (samples correctly classified as not the target emotion), $F_P$ the number of false positive samples (samples incorrectly classified as the target emotion), and $F_N$ the number of false negative samples (samples incorrectly classified as not the target emotion).
F1 score
The F1 score, which offers a balanced evaluation of the model's performance, is the harmonic mean of precision and recall. Precision assesses the proportion of true positive predictions among all positive predictions, whereas recall assesses the proportion of true positive predictions among all positive samples:
$$F_1 = 2 \cdot \frac{\text{precision} \cdot \text{recall}}{\text{precision} + \text{recall}}.$$
$F_1$ represents a weighted average of precision and recall, with a maximum value of 1 and a minimum value of 0.
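These metrics need not be hand-rolled; a short sketch using scikit-learn (assuming integer-encoded labels and predictions) follows.

```python
# Computing accuracy and the (macro-averaged) precision, recall and F1 score
# with scikit-learn; the toy labels below are placeholders.
from sklearn.metrics import accuracy_score, precision_recall_fscore_support

y_true = [0, 1, 2, 2, 1]   # toy ground-truth emotion labels
y_pred = [0, 1, 2, 1, 1]   # toy model predictions

accuracy = accuracy_score(y_true, y_pred)
precision, recall, f1, _ = precision_recall_fscore_support(
    y_true, y_pred, average="macro")   # macro-average over emotion classes
```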
RESULTS AND DISCUSSION
In the first experiment, harnessing the rich EMOTHAW dataset, the proposed study employed an attention-based transformer model to meticulously analyze the collected features and unravel the intricacies of emotional states. The model demonstrated its prowess on the test set, surpassing conventional machine learning techniques with a peak accuracy of 92.64%. Notably, the integration of both handwriting and drawing traits, as outlined in Table 1, proved to be a game-changer, giving superior accuracy in diagnosing depression and offering nuanced insights into the identification of anxiety and stress.
Using the EMOTHAW dataset, the proposed study applied the attention-based transformer model to the collected features to obtain recognition results for the three emotional states. On the test set, the model outperformed conventional machine learning techniques with a highest accuracy of 92.64%. Using handwriting and drawing traits together, as shown in Table 1, we were able to diagnose depression with the best accuracy. Similar findings were obtained for the identification of anxiety, with 79.51% accuracy using drawing features, 77.38% using writing features, and 83.22% integrating both. For stress detection, we obtained the highest accuracy of 79.41% using writing features. To encapsulate, the EMOTHAW dataset, coupled with the proposed model, not only advances the field of emotion detection but also sets a new standard in accuracy, particularly when considering the synergistic effects of handwriting and drawing traits together. These findings underscore the robustness of the proposed approach and its potential impact on applications ranging from mental health diagnostics to personalized well-being assessments.
In the second experiment, incorporating the diverse SemEval dataset enriched with emotional annotations from tweets, blog posts, and news articles, the study examined text-based emotion detection. The categorization of features into three distinct emotional states (angry, happy, and sad) yielded insightful results. Notably, the proposed model showed state-of-the-art performance on the test dataset, particularly excelling in the identification of sadness with a remarkable F1 score of 87.06%, as shown in Table 2. Moreover, the proposed study achieved notable success in detecting happy and angry states, reaffirming the versatility of the model across emotional dimensions.
The SemEval dataset includes a variety of text samples with emotional annotations, such as tweets, blog posts, and news articles. In this work, the features are categorized into three emotional states: angry, happy, and sad. On the test dataset, the model produced state-of-the-art results for sad state identification, with an F1 score of 87.06%, as shown in Table 2. For happy state detection we obtained a highest F1 score of 79.73%, and for angry state detection a highest F1 score of 83.12%. To sum up, the SemEval dataset, coupled with the proposed model, opens new horizons in text-based emotion detection. The robust performance across emotional states, especially the outstanding identification of sadness, highlights the adaptability and efficacy of the proposed work. These outcomes not only contribute to the academic discourse but also pave the way for practical applications in sentiment analysis across diverse textual genres.
Figure 2 showcases the robust performance of the proposed model. This high level of accuracy indicates the model's proficiency in learning from the training data and effectively generalizing to new, unseen data during testing. The sustained elevation of both lines above 90% emphasizes the reliability and effectiveness of the proposed model in accurately predicting emotional states from the provided features. To address color differentiation issues, circles were manually introduced to represent training accuracy and rectangles for testing accuracy; this visual distinction enhances clarity and inclusivity for a diverse audience and mitigates potential challenges associated with color perception.
A related study (2023) used a combination of temporal, spectral, and Mel Frequency Cepstral Coefficient (MFCC) approaches to extract characteristics from each signal and discover a link between the signal and the emotional states of stress, anxiety, and sadness. The generated feature vectors were classified using a Bidirectional Long Short-Term Memory (BiLSTM) network, obtaining a higher accuracy of 89.21% for depression detection using writing features, 80.03% for anxiety detection using both drawing and writing features, and 75.39% for stress detection using drawing features. The study conducted in Nolazco-Flores et al. (2021) used a fast correlation-based filtering approach to choose the optimal characteristics. The retrieved features were then augmented by introducing a tiny amount of random Gaussian noise and a randomly chosen proportion of the training data. A radial-basis SVM model was trained and obtained a higher accuracy of 80.31% for depression detection. However, the proposed work achieved the highest accuracy of 92.64% for depression detection using both drawing and writing features.
Similarly, for anxiety detection we achieved the highest accuracy of 83.22% using both drawing and writing features, and for stress detection the highest accuracy of 79.41% using writing features.
In contrast to prior studies, particularly the work by Esposito et al. (2020) employing emotion detection from text through a BiLSTM network, the proposed work exhibits noteworthy achievements on the SemEval dataset, as shown in Table 4. While Esposito et al. (2020) achieved a commendable F1 score of 83.03% for sad state detection, the proposed study surpassed these results, attaining the highest F1 scores of 83.12% for angry state detection, 79.73% for happy state detection, and an exceptional 87.06% for sad state detection. This underscores the efficacy and robustness of our proposed approach, establishing its superiority across diverse emotional states.
In the realm of emotion detection, various state-of-the-art approaches have been explored, as exemplified by studies such as Rahman & Halim (2023), while Nolazco-Flores et al. (2021) leveraged combined features with a Support Vector Machine (SVM). The reported accuracy results from these studies ranged from 68.67% to 89.21%. Notably, the proposed model surpassed all these benchmarks, achieving an impressive 92.64% accuracy on the EMOTHAW dataset. This attests to the superior performance and efficacy of the proposed approach compared to existing state-of-the-art methods in the field.
Threats to analysis
The proposed approach for emotion detection from handwriting and drawing samples exhibits promising results, although certain inherent limitations merit consideration. Firstly, the model's performance is contingent on the quality and representativeness of the training dataset, and potential biases may affect generalizability. Additionally, the limited diversity of handwriting and drawing styles in the training data may limit the model's adaptability to extreme variations in individual expression. Cultural nuances in emotional expression pose another challenge, as the model's performance may vary across diverse cultural contexts. Dependency on machine translation tools for languages beyond the training set introduces potential errors, and the predefined set of emotions in focus might not capture the full spectrum of human emotional expression. Ethical considerations regarding privacy and consent in deploying emotion detection technologies add another layer of complexity. While these limitations are acknowledged, the proposed model serves as a foundational step, emphasizing the need for ongoing research and refinement to address these challenges and enhance overall robustness in varied contexts.
Discussion
Text-based ED focuses on the feelings that lead people to write down particular words at specific moments. According to the literature, multimodal ED approaches based on voice, body language, facial expressions, and other cues receive more attention than their text-based counterparts. This dearth is mostly due to the fact that, unlike multimodal approaches, texts may not exhibit distinctive indications of emotion, making the identification of emotions from texts significantly more challenging. Because there are no facial expressions or vocal modulations in handwritten text, emotion detection becomes a difficult challenge. The purpose of this work was to ascertain the level of interest in the field of handwritten text emotion recognition. To perform the classification and analysis tasks, handwriting and drawing signals are processed through the feature extraction steps to isolate significant and informative attributes. In this study, we found that individual variances in handwriting characteristics were caused by emotional moods. By incorporating additional characteristics such as pressure and speed into the input data, the attention-based transformer model obtained high accuracy, and we observed that adding further features can enhance the model's performance even more.
CONCLUSION
Drawing and handwriting signals are instances of time series data, represented as a collection of data points acquired over time. In this work, we extract time-domain, frequency-domain, and statistical-domain features from the handwriting and drawing signals. The proposed model has the benefit of capturing long-range relationships in the input data, which is especially beneficial for handwriting and drawing samples containing sequential and spatial information. The model's attention mechanism also enables it to concentrate on relevant components and structures in the input data, which may enhance its capacity to recognize subtle emotional signals. The hyperparameters adjusted during testing include the number of layers, attention heads, the dimensionality of embeddings, the learning rate, and the batch size. In terms of accuracy and F1 scores, the attention-based transformer model used in this study excelled on two benchmark datasets.
In the future, transfer learning techniques could be used to pre-train the attention-based transformer model on large datasets and fine-tune it for specific emotion detection tasks, which could potentially improve the model's performance on smaller datasets.
Algorithm 1. Emotion detection using the attention-based transformer model.
Input: Handwriting and drawing samples. Output: Predicted emotion.
1. Import the required libraries and frameworks.
2. Define the Transformer model architecture: let L be the number of layers, d_model the dimension of the model, h the number of attention heads, and dff the dimension of the feed-forward network.
3. Load the pre-trained weights of the Transformer model.
4. Preprocess the samples: let N be the number of samples; for i = 1 to N: (a) apply the preprocessing steps to the i-th sample; (b) convert the preprocessed i-th sample into a suitable input format; (c) store the preprocessed i-th sample.
5. Load the preprocessed samples.
6. Convert the preprocessed samples into tensors.
7. Perform inference using the loaded model: let M be the maximum sequence length, V the vocabulary size, and S the number of emotions; for i = 1 to N: (a) pass the preprocessed i-th sample through the Transformer model: enc_output = Encoder(input), dec_output = Decoder(target, enc_output); (b) apply a linear layer to the decoder output: logits = Linear(dec_output); (c) apply a softmax function to obtain the predicted probabilities: probabilities = Softmax(logits); (d) store the predicted probabilities for the i-th sample.
8. Extract the predicted emotions: for i = 1 to N, extract the emotion with the highest probability from the i-th sample.
9. Return the predicted emotion.
Table 1. Results obtained using the EMOTHAW dataset.
Table 4. Results comparison on the SemEval dataset.
"Computer Science"
] |
Purification of Curcumin from Ternary Extract-Similar Mixtures of Curcuminoids in a Single Crystallization Step
Crystallization-based separation of curcumin from ternary mixtures of curcuminoids with compositions comparable to commercial extracts was studied experimentally. Based on solubility and supersolubility data of both pure curcumin and curcumin in the presence of the two major impurities, demethoxycurcumin (DMC) and bis(demethoxy)curcumin (BDMC), seeded cooling crystallization procedures were derived using acetone, acetonitrile and 50/50 (wt/wt) mixtures of acetone/2-propanol and acetone/acetonitrile as solvents. Starting from initial curcumin contents of 67–75% in the curcuminoid mixtures, single-step crystallization processes provided crystalline curcumin free of BDMC at residual DMC contents of 0.6–9.9%. Curcumin of the highest purity, 99.4%, was obtained from a 50/50 (wt/wt) acetone/2-propanol solution in a single crystallization step. It is demonstrated that the total product yield can be significantly enhanced via the addition of water, 2-propanol and acetonitrile as anti-solvents at the end of a cooling crystallization process.
Introduction
Curcumin (abbreviated CUR), known as diferuloylmethane, is an intense orange-yellow solid and a natural ingredient of the plant rhizome of Curcuma longa L. Two derivatives of CUR, demethoxycurcumin (abbreviated DMC) and bis(demethoxy)curcumin (abbreviated BDMC), can be found in the plant as well. Together they are known as curcuminoids (abbreviated CURDs). Depending on the soil condition, the total content of CURDs in the plant rhizome varies between 2 and 9%. With approximately 70% of the total CURD content, CUR represents the major component in turmeric [1][2][3]. As highlighted in Figure 1, the presence or absence of a methoxy functional group in the o-position to a phenolic group is the only difference in the chemical structure of the three CURDs. The molecular structure of CUR, comprising two equally substituted aromatic rings linked by a diketo group that exhibits keto-enol tautomerism, plays a crucial role in the reactivity of CUR [4,5].
Studies show that CUR can potentially be used to treat over 25 diseases owing to its anti-oxidative, immunosuppressive, wound-healing, anti-inflammatory and phototoxic effects [6][7][8]. These include, in particular, neurodegenerative diseases such as Alzheimer's and Parkinson's diseases, diabetes, heart disease, bacterial, viral and fungal diseases, AIDS and over 20 different cancers [9][10][11][12]. In addition to CUR, the potential use of DMC and BDMC in the prevention of cancer has also been emphasized [13][14][15]; DMC was reported to have the stronger inhibitory effect. Due to the higher reactivity of CUR, associated with its stronger pharmacological activity on the human body compared to the two other derivatives, CUR currently remains the targeted turmeric compound [18]. Despite its diverse pharmacological effects, the practical insolubility of CUR in water results in a very low bioavailability and therefore limits its use as a drug [19]. To improve the bioavailability, formulations of curcumin nanoparticles or metal complexes have been successfully implemented [20,21]. In addition, the application of CUR together with artemisinin in a CUR-artemisinin combination therapy against malaria was reported to decrease drug resistance [22]. Moreover, a CUR-artemisinin co-amorphous solid formulation showed a higher therapeutic effect in the treatment of cancer than the single-drug formulation [23]. For each of these applications, CUR has to be available in chemically pure form and in sufficient amount.
H.J.J. Pabon described the preparation of synthetic CUR and related compounds [24]. Kim et al. recently published a process for the production of CURDs in engineered Escherichia coli [25]. Nevertheless, the separation of CUR by means of solvent extraction from the plant rhizome still represents the most economical route of CUR production. In addition to plant proteins, oils and fats, the final extract contains 80% of the ternary CURD mixture [26]. In this mixture, CUR is the major component with an approximately 64% share of the total CURD content, together with 21% DMC and 15% BDMC [27]. Commercially available mixtures usually contain 77% CUR, 17% DMC and 6% BDMC [28]. Consequently, CUR has to be purified from the ternary mixture.
Two methods for separating CUR from the mixture of CURDs are described in the literature: column or thin-layer chromatography, and crystallization from solution.
For the chromatographic separation of CUR, silica gel (untreated or impregnated with sodium hydrogen phosphate) is commonly used as a stationary phase and various binary solvent mixtures of dichloromethane, chloroform, methanol, acetic acid, ethyl acetate and hexane as the mobile phase [29]. At the end of the process, three chromatographic fractions are enriched with the three CURDs, respectively [30,31]. Usually crystallization is applied as the final formulation step providing the solid product with desired specifications.
In the last decade, crystallization as a single separation technique was studied to purify CUR from the ternary mixture of curcuminoids [32][33][34]. Processes were described exploiting anti-solvent addition or system cooling, using methanol, ethanol and 2-propanol as process solvents and water as anti-solvent (Table 1).
As summarized in Table 1, crystalline CUR with purities of 92.2%, 96.0% and 99.1% at overall yields between 40 and 50% was obtained from initial CURD mixtures. The separation methods used were implemented as multi-step processes consisting of at least two successive sub-steps. It is reported that the main part of the BDMC could be depleted after the first separation step, while full removal was achieved after the second crystallization step [33,34]. DMC was always present in the final product. Ukrainczyk et al. observed an exponential decrease of the removal efficiency of DMC with an increasing number of successive crystallization steps [34]. In order to reach the desired product purity and also to improve the overall process yield, a combination of the two separation techniques, chromatography and crystallization, was recently studied. Horvath et al. successfully implemented this integrated process for the recovery of 99.1% pure artemisinin from the effluent of a photocatalytic reactor with 61.5% yield [35]. Heffernan et al. demonstrated the purification of single CURDs from the crude curcumin extract. There, the crystallization process performed first comprised three crystallization cycles, which provided 99.1% pure CUR in the final crystalline product. In the second process step, the remaining mother liquor was processed by column chromatography to isolate DMC and BDMC with purities of 98.3% and 98.6% and yields of 79.7% and 68.8%, respectively [36].
As has been demonstrated for other natural product mixtures, crystallization is a powerful technique to isolate a target compound from a multicomponent mixture within a single crystallization step [37,38]. Due to the fact that a 98% minimum purity of CUR is already sufficient for further drug application in pharmaceutical preparations [22], this study is directed to develop a separation process for isolation of pure crystalline CUR from the ternary mixture of CURDs within a single crystallization step.
To separate a target compound from a multi-component mixture, seeded cooling crystallization is preferably applied. Anti-solvent is usually added either at the beginning of the cooling step to generate the supersaturation in the solution or at the end of the process to increase the overall crystallization yield [39].
To purify CUR from the crude CURD mixture, seeded cooling crystallization processes were designed on the basis of solubility and nucleation measurements of pure CUR and CUR in presence of the CURDs mixture components in acetone, acetonitrile, ethanol, methanol, 2-propanol and selected binary mixtures thereof. Finally, with respect to the solubility results, 2-propanol, acetonitrile and water were considered as anti-solvents to improve the overall process yield.
Materials
Solid standards of curcumin, demethoxycurcumin (both >98%, TCI Chemicals) and bis(demethoxy)curcumin (>99%, ChemFaces China) were used as standards for HPLC and X-ray powder diffraction (XRPD) analysis. The solid standard of curcumin was also used to determine the solubility and nucleation behaviors. During the study, four crude solid mixtures of CURDs were purchased from Sigma Aldrich and Acros. The content of CUR, DMC and BDMC in the solids, determined by means of HPLC, is summarized in Table 2.
The highest CUR content, 80.7%, was found in the crude solid obtained from Acros. The CUR content in the crude solids from Sigma Aldrich varies between 67.2% and 75.0% depending on the purchased charge, but is most similar to that of the plant extract [28]. Accordingly, the solids from Sigma Aldrich were used as the crude mixture for the crystallization experiments. It should be emphasized that the significant differences in CUR content between the three solid charges made the implementation of the designed crystallization process more challenging. Acetone, acetonitrile, ethanol, methanol and 2-propanol (>99.8%, HiPerSolv CHROMANORM, VWR Chemicals, Germany) were used for the solubility studies and the crystallization experiments.
Analytical Methods
An analytical HPLC unit (Agilent 1200 Series, Agilent Technologies Germany GmbH) was used to characterize the solid standards and to quantify the CUR, DMC and BDMC contents in the crude mixtures as well as in the final crystallization products. The reversed-phase method reported by Jadhav et al. [40] was adjusted as follows: the mobile phase composition was fixed to 50/50 (vol/vol) acetonitrile/0.1% acetic acid in water. Before usage, water was purified via Milli-Q Advantage devices (Merck Millipore). The eluent flow rate was set to 1 mL/min. Solid samples, preliminarily dissolved in acetonitrile, were injected (injection volume 1 μL) into the column (LUNA C18, 250 × 4.6 mm, 10 μm, Phenomenex GmbH, Germany; column temperature 25 °C) and analyzed at a wavelength of 254 nm. Figure 2 shows chromatograms of the solid standards of BDMC, DMC and CUR compared to a ternary mixture of CURDs (exemplarily, crude solid No. 3). X-ray powder diffraction (XRPD) was applied to characterize the purchased solid standards, the solid fractions obtained during the solubility studies and the crystallization products. For the measurements, solid samples were ground in a mortar and prepared on background-free Si single-crystal sample holders. Data were collected on an X'Pert Pro diffractometer (PANalytical GmbH, Germany) using Cu-Kα radiation. Samples were scanned in a 2Theta range of 4 to 30° with a step size of 0.017° and a counting time of 50 s per step.
Solubility and Metastable Zone Width Measurements
Solubility investigations of pure CUR in acetone, acetonitrile, ethanol, methanol and 2-propanol were carried out via the classical isothermal method [41]. To evaluate the impact of the main impurities (DMC and BDMC) on the solubility behavior of CUR, the crude mixture of CURDs no. 2 was used in selected process solvents. Suspensions containing an excess of solid CUR and 5 mL of solvent were introduced into glass vials. To guarantee efficient mixing of the prepared suspensions, the vials were equipped with magnetic stirrers and sealed. Samples were placed in a thermostatic bath and allowed to equilibrate at constant temperatures between 5 and 70 °C for at least 48 h under stirring. Afterwards, samples of the equilibrated slurries were withdrawn with a syringe and filtered through a 0.45 μm PTFE filter. The obtained liquid phases were analyzed for solute content by HPLC. To preserve equilibrium conditions for the low-temperature samples, syringes and filters were precooled before usage. The corresponding wet solid fractions were characterized by XRPD.
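Isothermal solubility points such as these are commonly interpolated with a van't Hoff-type fit. The short sketch below shows the procedure with illustrative placeholder data, not the measured values of this study.

```python
# Van't Hoff-type interpolation of isothermal solubility data:
# ln(w) = A + B/T. Temperatures and solubilities are placeholders.
import numpy as np

T_c = np.array([5.0, 20.0, 35.0, 50.0, 65.0])   # equilibration temperatures, deg C
w = np.array([2.1, 3.0, 4.4, 6.5, 9.2])         # CUR solubility, wt% (illustrative)

T_k = T_c + 273.15
B, A = np.polyfit(1.0 / T_k, np.log(w), 1)      # slope B, intercept A

def w_sat(t_c):
    """Interpolated saturation concentration (wt%) at t_c in deg C."""
    return float(np.exp(A + B / (t_c + 273.15)))

print(f"Estimated solubility at 30 C: {w_sat(30.0):.2f} wt%")
```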
Metastable zone width data of pure CUR in selected process solvents were acquired with the multiple-reactor system Crystal16 (Avantium Technologies BV, Amsterdam). Suspensions containing a known excess amount of solid in solvent were prepared in standard HPLC glass vials equipped with magnetic stirrers and subjected to a heating step from 5 to 60 °C and a subsequent cooling step from 60 to −15 °C, both at a moderate rate of 0.1 °C/min. The temperatures of the "clear" and "cloud" points, representing the respective saturation and nucleation temperatures, were obtained via turbidity measurement.
Batch crystallization experiments were conducted in a jacketed 200 mL glass vessel equipped with a Pt-100 resistance thermometer (resolution 0.01 °C) connected to a thermostat (RP845, Lauda Proline, Germany) to control the system temperature. A magnetic stirrer was used for agitation.
With respect to the determined solubility behavior of CUR, four process solvents were selected. Consequently, four cooling crystallization processes were derived and conducted. Table 3 gives an overview of the chosen process solvents and the CURD mixtures to be separated. Exact solution composition data (Table 5) and the applied crystallization procedures are presented and discussed in connection with crystallization process design in Sections 3.2 and 3.3.
Selection of Solvents for Crystallization
To design a crystallization-based purification process, the selection of an appropriate solvent is crucial. The operation parameters for the crystallization process are established based on the specific solubility and nucleation behavior of the target compound in the corresponding solvent.
3.1.1. CUR Solubility in Acetone, Acetonitrile, Methanol, Ethanol and 2-Propanol

Acetone, acetonitrile, methanol, ethanol and 2-propanol were selected as possible process solvents because of their low toxicity. The CUR solubilities determined in these solvents are shown in Figure 3. As can be seen, the CUR solubilities increase with increasing temperature in all solvents. Compared to acetone, CUR is significantly less soluble in the other solvents (less than 1 wt%, except in acetonitrile at 40 °C). Hence, acetone was chosen as a suitable solvent for seeded cooling crystallization, and acetonitrile, methanol, ethanol and 2-propanol were considered as potential anti-solvents. According to the published very poor solubility of CUR in water (approx. 1.3 × 10^-7 wt% at 25 °C), water was also taken into account as an anti-solvent without extra solubility studies [20]. In Figure 4, CUR solid-phase XRPD patterns are shown, obtained from isothermal equilibration of CUR suspensions (Figure 4a) and by (polythermal) cooling of saturated CUR/solvent mixtures (Figure 4b-f). The measured patterns are compared with references for the three CUR polymorphs derived from single-crystal data given in the Cambridge Structural Database (CSD) [42].
The commercial solid standard of CUR, which represents the initial solid for the isothermal solubility studies, and all CUR solid phases obtained in equilibrium with saturated solutions in the solvents studied (Figure 4a) perfectly match the pattern of the known CUR polymorph I. The XRPD patterns obtained for CUR recrystallized polythermally from acetone and acetonitrile solutions (Figure 4b,c) can be assigned to CUR I as well. The CUR phases obtained by cooling of saturated methanol, ethanol and 2-propanol solutions (Figure 4d-f) do not match any shown reference phase but (except for a small missing reflex at 6.8° in the ethanol pattern) are identical to each other. Aside from that, their XRPD patterns differ from the CUR I phase only by some additional reflexes in the 2Theta range of 6-8°. One hypothesis explaining this behavior might be the incorporation of small amounts of the respective alcohol molecules in the crystal structure without changing the structure type. Further, according to the known complex solid-phase behavior of CUR [42][43][44][45][46][47][48] and BDMC [49,50], the formation of a new metastable form of CUR in the three alcohols, or of a solvate phase from ethanol, are also possible explanations. Since elucidation of the CUR phase behavior was not the main focus of the present study, this issue has to be verified in future investigations.
With the aim of selectively crystallizing pure CUR (form I) from the crude CURD solution and suppressing spontaneous nucleation of the undesired DMC and BDMC components, seeding with the CUR solid standard (form I) was applied in the cooling crystallization experiments. To evaluate the effect of the anti-solvents on the CUR solubility in acetone, saturation concentrations of CUR (solid standard) were measured exemplarily at 30 °C in the 50/50 (wt/wt) acetone/anti-solvent mixtures. Figure 5 shows that the obtained solubility data of CUR in the four binary solvent mixtures deviate from ideal linear behavior. Moreover, it is seen that the addition of methanol, ethanol and 2-propanol induces a dilution effect rather than the expected supersaturation of the solution. Since the relative dilution effect of ethanol and methanol is larger than that of 2-propanol, they were not considered further for crystallization process design. In contrast, the addition of acetonitrile increases the supersaturation of CUR in acetone; therefore, a high product yield can be expected. Consequently, the following four process solvents were selected for the seeded cooling crystallization of CUR: pure acetone and acetonitrile as well as 50/50 (wt/wt) mixtures of acetone/2-propanol and acetone/acetonitrile.
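The distinction between a dilution effect and a genuine anti-solvent effect can be made quantitative by comparing the CUR concentration after solvent addition with the measured solubility in the resulting mixture. The sketch below uses illustrative numbers only.

```python
# Does adding an anti-solvent supersaturate the solution or merely dilute it?
# Compare the diluted CUR concentration with the solubility in the mixture.
# All numbers are illustrative placeholders, not measured values.
m_sol = 100.0        # g of acetone solution, saturated with CUR
w_sat_acetone = 7.0  # wt% CUR solubility in acetone (placeholder)
w_sat_mix = 2.5      # wt% CUR solubility in the 50/50 mixture (placeholder)

m_cur = m_sol * w_sat_acetone / 100.0
w_after = 100.0 * m_cur / (2.0 * m_sol)   # equal mass of anti-solvent added

S = w_after / w_sat_mix                   # supersaturation ratio
print(f"CUR concentration after addition: {w_after:.2f} wt%, S = {S:.2f}")
# S > 1: anti-solvent effect (as for acetonitrile); S < 1: dilution effect
# (as observed for methanol, ethanol and, to a lesser extent, 2-propanol).
```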
Effect of DMC and BDMC on CUR Solubility in the Selected Process Solvents
Before designing seeded cooling crystallizations, the solubility behavior of CUR was evaluated in presence of the main impurities DMC and BDMC in acetone, 50/50 acetone/2-propanol, 50/50 acetone/acetonitrile and acetonitrile. In Figure 6, the resulting solubility data are compared with the solubility values of pure CUR in the respective solvents.
As can be seen, the solubilities of CUR in the presence of DMC and BDMC slightly exceed those of pure CUR in all four solvents. Moreover, comparison of the CUR solubility in 50/50 acetone/2-propanol and 50/50 acetone/acetonitrile shows that with acetonitrile as the anti-solvent, a higher supersaturation of CUR can be obtained in solution, resulting in a higher product yield. This observation confirms the behavior of pure CUR in the binary solvents discussed in Figure 5.
Design of the Seeded Cooling Crystallization for Separation of CUR
Based on the solubility curves of CUR in presence of the main impurities and the observed nucleation behavior of pure CUR in the respective solvents, four seeded cooling crystallization processes were derived to separate CUR from the CURD mixtures as illustrated in Figure 7.
The starting temperatures of the crystallization processes in the 50/50 mixtures of acetone/2-propanol and acetone/acetonitrile and in acetonitrile were set at 60 °C. To avoid uncontrolled evaporation of acetone, 45 °C was chosen as the starting temperature in this solvent.
The temperatures at which seeds of pure CUR (form I) were introduced into the acetone, 50/50 acetone/acetonitrile and acetonitrile solutions were chosen to be at least 5 K below the saturation temperature of CUR (approximately in the first third of the metastable region). However, the metastable region of CUR in 50/50 acetone/2-propanol (Figure 7, purple lines) is significantly narrower than for the other three solvent systems; therefore, the seeds were added at approximately half of the metastable region. An overview of the selected process parameters for the four seeded cooling crystallization processes is given in Table 4. The initial concentrations of CUR in the crude CURD mixtures were selected in accordance with the set starting temperatures to guarantee undersaturation of CUR in the starting solutions. The amounts of the crude solids, the process solvents and the calculated initial CUR content in the four starting solutions are listed in Table 5.
Implementation of the Purification Process
In the first step, the four initial crude solutions were prepared using the corresponding amounts of the crude solid mixture in the respective solvents (Table 5). The seeded cooling crystallization of CUR was conducted in a second step following the four process trajectories shown in Figure 7. Starting at the set temperatures, the undersaturated clear solutions were cooled down to 0 °C at a linear rate of 10 K/h. After the corresponding saturation temperature was passed, seed crystals of pure CUR form I (ca. 50 mg) were introduced into the supersaturated solution at Tseeds (Table 4). At the end of the cooling process at 0 °C, the obtained product suspensions were stirred for a further 0.5 h. Subsequently, solid-liquid phase separation was carried out on suction filters (filter paper pore size 0.6 μm). To remove adhering mother liquor from the filter cake, the collected crystals were washed with about 100 g of cold acetone (<0 °C, processes 1-3) or acetonitrile (<0 °C, process 4). The products were then dried at 40 °C, and the CUR purity and the yield were analyzed. During washing with acetone, visible dissolution of the filter cake was observed, caused by the high solubility of CUR in acetone (about 5 wt% at 0 °C). Accordingly, lower product yields were expected for the processes using acetone as the washing solvent (processes 1-3).
The results of the four seeded cooling crystallizations are summarized in Table 6. The maximum thermodynamically possible yield of CUR, ηTD, was calculated according to Equation (1), and the total product yield of CUR, η, according to Equation (2). Table 6 shows that the highest purity of CUR in the crystalline product (99.4%) was achieved in the 50/50 acetone/2-propanol mixture (process 2); however, only 13% of the initial CUR content in the crude mixture was recovered. Crystalline CUR with decreasing purities of 95.7%, 92.3% and 90.1% but increasing total product yields of 31%, 55% and 62% was obtained from acetone, acetonitrile and 50/50 acetone/acetonitrile, respectively. The lower total yields from the acetone and acetone/2-propanol solutions are partly associated with the enhanced CUR solubility at the final process temperature compared to the acetonitrile-containing solutions (see Figure 7), which, however, does not explain the extremely low yield achieved in the latter case. The purity results further verify that BDMC could be completely removed from the crystalline products within a single separation step, while the content of DMC was noticeably reduced. The presence of DMC as an impurity in the products can probably be attributed to the close similarity of the molecular structures of DMC and CUR (Figure 1). It can be postulated that DMC molecules compete with CUR in solution upon formation of the main crystal lattice. To ascertain whether DMC is present alongside CUR in crystalline or amorphous form, the four crystallization products were analyzed by XRPD. In Figure 8, the corresponding patterns are compared with the commercial solid standards of DMC and CUR. Since all XRPD reflexes in the diffractograms of the crystalline products can be clearly distinguished and lie uniformly on the baseline, the presence of an amorphous fraction in the solid products cannot be confirmed. Moreover, all XRPD patterns appear identical to that of the CUR solid standard. Despite the increasing DMC content in the crystalline products (0.6-9.9%), none of the recorded patterns can be clearly assigned to the solid standard of DMC; only a slight shift of single reflexes of the crystalline products is indicated with increasing DMC content in the solids.
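Equations (1) and (2), referenced above, are not reproduced in this text. A plausible form consistent with the stated definitions, with all symbols assumed rather than taken from the original, is:

```latex
% Assumed forms of Eqs. (1) and (2); the symbols are illustrative.
\eta_{TD} = \frac{m_{\mathrm{CUR},0} - m_{\mathrm{solvent}}\,
            w_{\mathrm{CUR}}^{\mathrm{sat}}(T_{\mathrm{end}})}{m_{\mathrm{CUR},0}},
\qquad
\eta = \frac{m_{\mathrm{product}}\, w_{\mathrm{CUR,product}}}{m_{\mathrm{CUR},0}}
```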
Due to the strong similarity of the CUR and DMC molecules, partial miscibility in the solid state might be a possible explanation here. However, depending on the instrument and the structural similarity of the compounds, the limit of detection of the XRPD method is known to be 5-7 wt%, or 1 wt% in the best cases. Thus, the incorporation of DMC molecules is not readily assessable by XRPD at these low contents. Clarifying this issue requires further work, which was beyond the scope of this paper.
Improvement of the Total Yield by Means of Anti-Solvent Addition
To increase the product yield of a seeded cooling crystallization, it is suitable to add anti-solvent to the product suspension at the end of the cooling step [39]. In this work, the addition of water, 2-propanol and acetonitrile as anti-solvents of CUR in acetone was investigated to improve the overall yields of processes 1, 2 and 4. The final solvent/anti-solvent ratio was set to 25/75 (wt/wt). The study was conducted using crude solid no. 1, which has the lowest CUR content of 67.2%.
In the first step, three equal starting solutions containing 8.5 wt% CUR in acetone were prepared, and three identical seeded cooling crystallization processes were carried out as previously described for process no. 1. When the set end temperature of the cooling profile (0 °C) was reached, cold (<0 °C) anti-solvent, 2-propanol (process 1-2) or acetonitrile (process 1-3), was added to the product suspension (see Table 7), and the system was then stirred for 3 h at a constant 0 °C. The addition of water (process 1-1) was carried out at 26 °C after introducing CUR seeds into the supersaturated acetone solution; the suspension was then cooled down to the end temperature of 0 °C and likewise stirred for 3 h. Solid-liquid phase separation was performed at the end of each anti-solvent crystallization process. The crystalline products were dried at 40 °C, and the CUR purity and yield were analyzed. To maintain the total product yield and avoid the previously observed dissolution of the filter cake during washing with cold acetone, the washing step was skipped. The results obtained are summarized in Table 7 (overview of the results to improve the total product yield η(CUR) via anti-solvent addition). In all processes, the addition of anti-solvent to the product suspension at the end of the cooling step led to a significant increase in yield. However, the CUR purity in the final crystalline products was noticeably decreased: DMC and, in addition, low amounts of BDMC (≤1.1%) were present as impurities. Besides the detrimental effect of abstaining from product washing, an anti-solvent effect on DMC and BDMC cannot be excluded here (indeed, the solubilities of DMC and BDMC are reported to exceed that of CUR in acetonitrile and isopropanol [34]). Nevertheless, similar to the cooling crystallization (Table 6), the use of 2-propanol as anti-solvent (process 1-2) provided CUR at the highest purity (96.2%) but at the lowest yield (36%). Regarding the reduced yield in the presence of 2-propanol, it can only be presumed at this stage that, as indicated in the polythermal (non-seeded) solubility studies with 2-propanol, an additional metastable (and thus more soluble) polymorph or solvate phase occurs, which keeps part of the CUR in the solution phase and thereby reduces the CUR yield in the solid phase.
Conclusions
In this work, the solubility behavior of pure CUR and of CUR in the presence of the two main impurities DMC and BDMC, as well as the supersolubilities in acetone, acetonitrile, methanol, ethanol, 2-propanol and their binary mixtures, were investigated first. Based on the data obtained, seeded cooling crystallizations in four different process solvents (acetone, acetonitrile and 50/50 (wt/wt) mixtures of acetone/2-propanol and acetone/acetonitrile) were designed and implemented. As a result, the purity of CUR could be increased from initial contents of 67-75% in the curcuminoid mixtures up to values of 90.1-99.4% in a single crystallization step. All crystallization processes provided crystalline curcumin (form I) free of BDMC after this single step. DMC was significantly depleted from initial contents of 19.2-25.5% in the crude mixtures to residual contents of 0.6-9.9%. Total product yields were significantly enhanced to 70-79% via the addition of water and acetonitrile as anti-solvents at the end of the cooling crystallization process.
The presence of crystalline or amorphous DMC in the CUR products could not be detected by XRPD analysis. Whether this is caused by experimental detection limits or by potential formation of CUR/DMC mixed crystals has to be clarified in future studies.
Based on the work presented, a seeded cooling crystallization from a 50/50 (wt/wt) acetone/2-propanol solvent mixture is seen as the best purification strategy, providing CUR free of BDMC at the highest purity of 99.4% in a single crystallization step. However, there is still room for process optimization, in particular with respect to yield. This includes application of a reduced cooling rate and a lower final cooling temperature to increase both the crystallization and the total yield. Further, to avoid product losses in downstream processing, washing the product with an acetone/anti-solvent mixture (for example, acetonitrile) is suggested. No information regarding the maximum admissible limits of BDMC and DMC in crystalline CUR was found in the literature. In any case, however, the CUR purity obtained within a simple single crystallization step in this study represents a significant improvement compared to alternative process concepts.
A Customized Bolus Produced Using a 3-Dimensional Printer for Radiotherapy
Objective Boluses are used in high-energy radiotherapy in order to overcome the skin sparing effect. In practice, though, commonly used flat boluses fail to make perfect contact with the irregular surface of the patient's skin, resulting in air gaps. Hence, we fabricated a customized bolus using a 3-dimensional (3D) printer and evaluated its feasibility for radiotherapy. Methods We designed two kinds of bolus for production on a 3D printer: a 3D printed flat bolus for the Blue water phantom and a 3D printed customized bolus for the RANDO phantom. The 3D printed flat bolus was fabricated to verify its physical quality and was evaluated by assessing dosimetric parameters such as D1.5 cm, D5 cm, and D10 cm. The 3D printed customized bolus was then fabricated, and its quality and clinical feasibility were evaluated by visual inspection and by assessing dosimetric parameters such as Dmax, Dmin, Dmean, D90%, and V90%. Results The dosimetric parameters of the resulting 3D printed flat bolus showed that it was a useful dose escalating material, equivalent to a commercially available flat bolus. Analysis of the dosimetric parameters of the 3D printed customized bolus demonstrated that it provided good dose escalation and good contact with the irregular surface of the RANDO phantom. Conclusions A customized bolus produced using a 3D printer could potentially replace commercially available flat boluses.
Introduction
Since the discovery of X-rays over one hundred years ago, radiotherapy has been used for the treatment of tumors. In order to deliver a sufficient radiation dose to the tumor, an adequate type of radiation is selected depending on the tumor location. Conventionally, high-energy photons are used to treat deeply located lesions, and electrons are used for the treatment of superficial lesions such as skin cancer.
The International Commission on Radiation Units and Measurements Report 62 recommends that the target volume be encompassed within the area that receives at least 95% of the prescribed dose when radiotherapy is administered [1]. However, a sufficient dose cannot be delivered to the surface due to the skin sparing effect of high-energy radiation beams. To avoid this limitation, several types of commercially available boluses are often used [2]. These bolus materials should be nearly tissue equivalent and allow a sufficient surface dose enhancement.
Despite the advent of commercial boluses and the modernization of clinical equipment, uncertainties in the preparation and utilization of a bolus remain [3]. In practice, the most commonly used commercial flat boluses cannot form perfect contact with the irregular surface of the patient's skin, particularly the nose, ear, and scalp, and the resulting air gap gives rise to a second skin sparing effect that reduces both the maximum and the surface dose [4][5][6][7][8]. Even more problematic is that the depth of the air gap cannot be anticipated and thus accounted for in the treatment planning step, leading to a discrepancy between the planned and delivered dose. Thus, commercial flat boluses need to be used with great care, especially when the skin has a particularly irregular shape.
Recently, there have been significant advances in 3-dimensional (3D) printer technology, and attempts have been made to utilize them in medicine [9,10]. In this study, we fabricated a customized bolus using a 3D printer and assessed whether it could overcome the disadvantages of currently used commercial flat boluses.
Bolus fabrication using a 3D printer
For this study, we used the Blue water phantom (Standard Imaging, Middleton, WI) as a homogeneous phantom and the RANDO phantom (Radiology Support Devices, Long Beach, CA) as an anthropomorphic phantom. Computed tomography (CT) images of the Blue water phantom and the RANDO phantom were obtained using a LightSpeed RT 16 CT scanner (GE Medical Systems, Waukesha, WI) in the general digital imaging and communications in medicine (DICOM) format (Figure 1a). The CT scanning conditions were as follows: slice thickness, 1.25 mm; peak voltage, 120 kVp; current, 440 mA (Auto); noise index, 7.35; pitch, 0.938; and display field of view, 30 cm. Eclipse ver. 8.9 (Varian Medical Systems, Palo Alto, CA) with the Anisotropic Analytical Algorithm was used as the treatment planning system (TPS). The threshold for body contouring was −500 Hounsfield units (HU), without the use of the smoothing option.
Based on these CT images, we fabricated a 3D printed flat bolus on the Blue water phantom to verify its physical quality. To fully cover the 10 × 10 cm open field, the 3D printed flat bolus was sized at 11 × 11 cm (Figure 1c, bottom), with a thickness of 1 cm in order to escalate the dose in the buildup region of a 6 megavoltage (MV) photon beam. The RANDO phantom was used to fabricate a 3D printed customized bolus, for which a virtual target volume was delineated below the surface of the phantom; the size of the target volume was 4 × 4 cm around its nose (Figure 1b, top). To completely cover the 5 × 5 cm open field, the bolus was sized at 7 × 7 cm (Figure 1c, top), and the treatment plan was designed to cover 90% of the target volume with 90% of the prescribed dose. In order to meet this condition, the printed bolus was made 1 cm thick.
Although the bolus was designed using Eclipse, the 3D-rendered structure cannot be directly converted into the stereolithography (STL) format. Therefore, we performed the following additional steps. OsiriX MD ver. 2.8.x (OsiriX, Geneva, Switzerland) was used for 3D rendering of the designed bolus structure in DICOM-RT format. In order to convert the file into the STL format, 3ds Max 2013 (Autodesk, San Rafael, CA) was used (Figure 1d). Insight ver. 9.1 (Stratasys, Eden Prairie, MN) was used to print the STL file of the designed bolus on a Fortus 400mc 3D printer (Stratasys, Eden Prairie, MN). The Fortus 400mc is a fused deposition modeling 3D printer, and we used a 0.254 mm layer deposition (variation: ±0.127 mm per mm) for printing. The time required for 3D printing of the flat bolus and the customized bolus was 3 and 4.5 hours, respectively. The bolus material was ABS-M30 (Stratasys, Eden Prairie, MN), a form of acrylonitrile butadiene styrene that is commonly used by these devices. This material has a density of 1.04 g/cm3 and measures −123.6 ± 18.2 HU at 120 kVp. A schematic representation of this process is shown in Figure 1.
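The same mask-to-mesh step can be reproduced with open-source tools. The sketch below (scikit-image marching cubes plus numpy-stl) is a generic analogue of the Eclipse/OsiriX/3ds Max pipeline used here, with a placeholder volume standing in for the contoured bolus.

```python
# Generic mask-to-STL step using open-source tools (scikit-image + numpy-stl);
# a sketch of the general approach, not the workflow used in this study.
# `bolus_mask` is a placeholder for the contoured bolus volume.
import numpy as np
from skimage import measure
from stl import mesh  # pip install numpy-stl

bolus_mask = np.zeros((60, 60, 60), dtype=np.uint8)
bolus_mask[20:40, 20:40, 20:30] = 1       # stand-in for the designed bolus

# Triangulated isosurface; spacing = voxel size in mm (slice thickness first).
verts, faces, _, _ = measure.marching_cubes(bolus_mask, level=0.5,
                                            spacing=(1.25, 1.0, 1.0))

stl_mesh = mesh.Mesh(np.zeros(faces.shape[0], dtype=mesh.Mesh.dtype))
for i, face in enumerate(faces):
    stl_mesh.vectors[i] = verts[face]     # one triangle per face
stl_mesh.save("bolus.stl")                # ready for slicing and 3D printing
```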
Evaluation of the 3D printed bolus
In order to evaluate the 3D printed bolus, treatment plans were generated for the Blue water phantom without a bolus, with a superflab bolus, and with the 3D printed flat bolus. All of the plans were set at 200 monitor units (MU) with a single 6 MV photon beam. The parameters of the treatment plan were a 0 degree gantry angle, a 6 MV photon beam, a 10 × 10 cm open field, a 100 cm source-to-surface distance (SSD), and a 0.25 cm calculation grid. For absolute dose measurement, a Farmer-type ionization chamber (Exradin A19 ion chamber, Standard Imaging, Middleton, WI) and a SuperMAX electrometer (Standard Imaging, Middleton, WI) were used. For dose profile measurement, Gafchromic EBT2 film (International Specialty Products, Wayne, NJ) was used. An Epson Perfection V700 Photo Scanner (Epson, Long Beach, CA) was used to determine the optical density of the films, and ImageJ ver. 1.47v (National Institutes of Health, Bethesda, MD) was used for the film analysis. The measurement parameters were the same as those of each plan. All plans were compared in terms of the percent depth dose (PDD) measured from the film and the TPS, and the depth doses (Dd) measured from the ionization chamber at the central axis. The dosimetric parameters are defined below.
Treatment plans were then generated for the RANDO phantom without a bolus and with the 3D printed customized bolus. The parameters of the treatment plan were a 0 degree gantry angle, a 6 MV photon beam, a 5 × 5 cm open field, a 100 cm SSD, and a 0.25 cm calculation grid. The plan with the 3D printed customized bolus was set so that 90% of the prescribed dose was delivered to 90% of the target volume, and the plan without a bolus and the plan with the 3D printed customized bolus were normalized to the same maximum dose of the target volume. Both plans were compared in terms of the percent depth dose (PDD) at the central axis and the dose volume histogram (DVH) of the target volume. The Dmax, Dmin, Dmean, D90%, and V90% of the treatment plans were compared. These dosimetric parameters are defined below: 1) dmax: depth of maximum dose from the surface of the phantom.
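As a concrete illustration of the depth-dose comparison, the percent depth dose is each depth reading normalized to the maximum reading, as in the minimal sketch below (the depths and doses are illustrative, not measured values).

```python
# Percent depth dose from depth-dose readings: PDD(d) = 100 * D(d) / D(d_max).
# Depths and doses are illustrative placeholders, not measured values.
import numpy as np

depth_cm = np.array([0.0, 0.5, 1.5, 5.0, 10.0])
dose = np.array([110.0, 175.0, 200.0, 172.0, 134.0])   # chamber readings

d_max = depth_cm[np.argmax(dose)]        # depth of maximum dose
pdd = 100.0 * dose / dose.max()
print(f"d_max = {d_max} cm; PDD at 10 cm = {pdd[-1]:.1f}%")
```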
Results
The 3D printed flat bolus on the Blue water phantom

The 3D printed flat bolus was successfully fabricated using the 3D printer (Figure 1e, bottom) and was a good fit against the surface of the Blue water phantom, with no air gap between the bolus and the phantom. The dose distribution of the plan without a bolus revealed that the prescribed dose could not be fully delivered to the surface of the Blue water phantom (Figure 2a). The dmax of this plan was calculated by the TPS as 1.48 cm, whereas the plan with the 3D printed flat bolus produced a dmax of 0.63 cm. Thus, the dmax of the plan with the 3D printed flat bolus was shifted 0.85 cm towards the surface of the Blue water phantom, suggesting that the 3D printed flat bolus is also a useful dose escalating material in radiotherapy. Moreover, the dose distributions of the plans with the superflab and the 3D printed flat bolus were similar, as expected (Figure 2b, c). The differences between the dose calculated by the TPS and the dose measured with the ionization chamber at depths of 1.5 cm, 5 cm, and 10 cm beneath the surface of the phantom are shown in Table 1.
The differences between the calculated and measured doses were less than 1%, indicating that the dose distribution can be calculated to a very high degree of accuracy when the 3D printed bolus is applied. The PDD from the TPS and film dosimetry are shown in Figure 3, and the shapes of the plots of the calculated PDD from the TPS are similar to those of the measured PDD from the film.

The 3D printed customized bolus on the RANDO phantom

Figure 4a shows the 3D printed customized bolus produced using the 3D printer, and Figure 4b shows it positioned on the surface of the RANDO phantom. On visual inspection, the 3D printed customized bolus was found to fit well against the surface of the RANDO phantom, and this was verified in cross section (Figure 4c, d) and by CT imaging (Figure 4e, f).
For the RANDO phantom study, the dose distributions of the plans without a bolus and with the 3D printed customized bolus on the RANDO phantom are shown in Figure 5, indicating that the 3D printed customized bolus is a good buildup material. For the plan without a bolus, the Dmax, Dmin, Dmean, D90%, and V90% of the target volume were 101.3%, 25.4%, 86.4%, 62.7%, and 53.5%, respectively, suggesting that the plan without a bolus cannot fully deliver the prescribed dose to the target volume. When the 3D printed customized bolus was added, the Dmax, Dmin, Dmean, D90%, and V90% of the target volume were 101.3%, 90.0%, 95.5%, 91.6%, and 100.0%, respectively, indicating effective dose coverage. Each dosimetric parameter is shown in Table 2, and the PDD and DVH of both plans are illustrated in Figure 6. Both the PDD and the DVH show sufficient dose escalation in the target volume with the 3D printed customized bolus.
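The DVH metrics quoted above follow their standard definitions; the sketch below computes them from an array of target-voxel doses, using randomly generated placeholder values.

```python
# DVH-style metrics from an array of target-voxel doses (in % of prescription).
# Standard definitions assumed: D90% is the dose received by at least 90% of
# the volume (the 10th percentile); V90% is the volume fraction at >= 90% dose.
import numpy as np

doses = np.random.default_rng(0).uniform(85.0, 101.0, size=10_000)  # placeholder

d_max, d_min, d_mean = doses.max(), doses.min(), doses.mean()
d90 = np.percentile(doses, 10)                 # D90%
v90 = 100.0 * np.mean(doses >= 90.0)           # V90%, in % of volume
print(f"Dmax={d_max:.1f}%, Dmin={d_min:.1f}%, Dmean={d_mean:.1f}%, "
      f"D90%={d90:.1f}%, V90%={v90:.1f}%")
```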
Discussion
The aim of radiotherapy is to deliver a sufficient radiation dose to a defined tumor whilst minimizing the dose to the surrounding healthy tissue. High-energy photons are widely used in modern radiotherapy; however, they exhibit a skin sparing effect derived from the buildup region. This is regarded as advantageous when tumors are deeply located, as damage to the skin and the resulting complications are avoided. On the other hand, if the tumors are superficial, the skin sparing effect reduces the tumor dose and could result in treatment failure. For the treatment of tumors on or near the skin, the skin sparing effect needs to be overcome in order to reduce the risk of recurrence, and a bolus is therefore placed on the patient's skin. However, commonly used flat bolus materials cannot make perfect contact with an irregular surface, leaving an unwanted air gap between the two. Butson et al. reported that the surface dose was reduced by approximately 6-10%, depending on the field size and angle of incidence, when using a 6 MV photon beam in the presence of a 10 mm air gap [4]. Khan et al. also studied the dose perturbations of a 6 MV photon beam and found that the surface dose is significantly affected by air gaps greater than 5 mm [5]. In the case of electron beams, several studies have investigated the dose reduction resulting from air gaps, with results similar to those obtained with photon beams [7,8]. However, an air gap might be unavoidable in routine daily patient setup. Even more problematic is that the depth of the air gap cannot be anticipated and thus calculated at the treatment planning step; as a result, there might be a discrepancy between the planned and delivered doses.
In this study, we fabricated a 3D printed flat bolus and evaluated its properties as a bolus material. As shown in Figure 2c and Table 1, the 3D printed flat bolus can provide effective dose coverage in the buildup region. The dmax of the plans with the superflab and the 3D printed flat bolus were shifted toward the surface of the Blue water phantom by as much as 0.91 cm and 0.85 cm, respectively. There were slight differences between the dosimetric results obtained using these boluses because the 3D printed flat bolus is not identical to the superflab bolus with respect to its HU value and density. At 120 kVp, the HU of the 3D printed bolus was −123.6 ± 18.2 HU, compared to −33.0 ± 7.6 HU for the superflab bolus. In addition, neither the commercially available flat boluses nor the 3D printed flat bolus have completely homogeneous HU values, potentially giving rise to variation in the measured doses at the central axis.
We then fabricated a customized bolus using a 3D printer and evaluated its clinical feasibility by comparing its performance with a treatment plan without a bolus. As shown in Figure 4, the 3D printed bolus fits well against the irregular surface of the RANDO phantom, and the dosimetric parameters of the plans without a bolus and with the 3D printed customized bolus on the surface of the RANDO phantom indicated that the customized bolus is a good buildup material. Furthermore, the treatment plan with the 3D printed customized bolus could be clinically effective, help to overcome the problem of variable air gaps, and improve the reproducibility of daily setup conditions on irregular surfaces compared to commercial flat boluses.
Conclusions
The customized bolus produced by a 3D printer could potentially replace and improve upon commercially available flat boluses. The 3D printed boluses can increase the reproducibility of the daily setup and help overcome some of the disadvantages of currently used commercially available flat boluses. The 3D printed bolus could therefore also increase the efficacy of radiotherapy.
Effect of Graphene Nanoplatelets (GNPs) on Fatigue Properties of Asphalt Mastics
To investigate the effect of graphene on the fatigue properties of base asphalt mastics, graphene nanoplatelets (GNPs)-modified asphalt mastics and base asphalt mastics were prepared. A dynamic shear rheometer (DSR) was used to conduct the tests in the stress-controlled mode of a time-sweep test. The results showed that GNPs can improve the fatigue life of asphalt mastic. Under a stress of 0.15 MPa, the average fatigue life growth rate (ω¯) was 17.7% at a filler-asphalt ratio of 0.8, 35.4% at 1.0, and 45.2% at 1.2; under a stress of 0.2 MPa, the average fatigue life growth rate (ω¯) was 17.9% at a filler-asphalt ratio of 0.8, 25.6% at 1.0, and 38.2% at 1.2. The growth value (ΔT) of fatigue life of GNPs-modified asphalt mastics increased correspondingly with the increase of filler–asphalt ratio, the correlation coefficient R2 was greater than 0.95, and the growth amount showed a good linear relationship with the filler–asphalt ratio. In the range of 0.8~1.2 filler–asphalt ratio, the increase of mineral powder can improve the fatigue life of asphalt mastics, and there is a good linear correlation between filler–asphalt ratio and fatigue life. The anti-fatigue mechanism of GNPs lies in the interaction between GNPs and asphalt, as well as its own lubricity and thermal conductivity.
Introduction
Asphalt pavements are subjected to repeated vehicle loads during service, which makes fatigue damage one of the main forms of pavement distress. As asphalt is the main binder of the pavement material, the fatigue failure of the pavement is mainly attributed to the fatigue cracking of the asphalt: under repeated loading, micro-cracks in the asphalt gradually grow into macro-cracks, causing the interfaces between aggregates to fail and finally manifesting as pavement cracking [1,2].
Research on graphene-modified asphalt, motivated by the excellent properties of graphene itself, has made some progress but is still at a relatively early stage. Marasteanu et al. [3] studied the influence of graphene addition on the compaction performance of asphalt mixtures. Their research shows that graphene can act as a "lubricant", effectively reducing the compaction effort of the asphalt mixture: without graphene, about 60 revolutions are needed to compact the mixture to 5% porosity, whereas after adding graphene at 28% of the asphalt mass, only 20 revolutions are needed. Zhou et al. [4] studied the influence of graphene on the thermodynamic properties of asphalt by comparing molecular simulations and experiments, and investigated the thermal stability and glass transition temperature by differential scanning calorimetry; the glass transition temperatures Tg of the base asphalt and the graphene-modified asphalt were 251 K and 276 K, respectively, indicating that graphene can raise Tg and improve the thermal properties. Moreno-Navarro et al. [5] heated asphalt and graphene-modified asphalt samples and measured the times required for the samples to rise by 5 °C as 182 s and 101 s, respectively, indicating that graphene can improve the thermal conductivity of the material. To solve the problems of poor compatibility and poor low-temperature performance of graphene-modified asphalt, Han et al. [6] prepared a graphene (GNPs)-grafted polystyrene (PS) composite by in situ polymerization and compounded the resulting modified SBS with matrix asphalt to obtain PS-GNPs/SBS modified asphalt. The results show that the addition of PS-GNPs can effectively improve the compatibility, plasticity, viscoelasticity, high-temperature rutting resistance, fatigue resistance and low-temperature performance of the material. Han et al. [7] addressed the poor compatibility that limits the modification effect of graphene (GNPs) in SBS-modified asphalt by covalently grafting octadecylamine (ODA) onto the surface of GNPs to obtain ODA-GNPs complexes; ODA-GNPs and the SBS modifier were used to prepare ODA-GNPs/SBS modified asphalt. The results show that ODA grafting enhances the lipophilicity of GNPs in asphalt, which leads to a better dispersion effect and further improves the plasticity, high- and low-temperature performance and viscosity of the base asphalt. Li et al. [8] compared the performance of PS-GNPs/SBS modified asphalt with that of ODA-GNPs/SBS modified asphalt. Dynamic shear rheometer (DSR), multiple stress creep recovery (MSCR), bending beam rheometer (BBR), time-sweep, Marshall water stability, freeze-thaw splitting and rutting tests showed that the mechanical properties and water damage resistance of PS-GNPs/SBS modified asphalt were better than those of ODA-GNPs/SBS modified asphalt. The fatigue life Nf50 of PS-GNPs/SBS modified asphalt with 0.02% GNPs content was 272.79% higher than that of the original SBS asphalt; the fatigue life Nf50 of ODA-GNPs/SBS modified asphalt with 0.08% GNPs content was 247.19% higher. Su et al. [9] prepared aminated-graphene (NH2-GNPs/D-PAN) fiber-modified asphalt by modifying polyacrylonitrile (PAN) fibers via dopamine self-polymerization and covalent grafting of aminated graphene.
It was shown that NH2-GNPs/D-PAN asphalt had better viscoelasticity, permanent deformation resistance, low-temperature crack resistance and water damage resistance than the original fiber asphalt. Guo et al. [10] improved the performance of asphalt through the synergistic enhancement of graphene and tourmaline. Graphene/tourmaline composite-modified asphalt was prepared, and its rheological properties were studied; the results showed that the rutting resistance of the graphene/tourmaline composite-modified asphalt was much higher than that of tourmaline-modified asphalt. Li et al. [11] used polymethylmethacrylate (PMMA) and GNPs to prepare a PMMA-GNPs composite by microwave-heated emulsion polymerization, and then prepared PMMA-GNPs/SBS modified asphalt together with SBS. The results showed that microwave heating could significantly shorten the reaction time compared with traditional water-bath or oil-bath heating. Compared with SBS-modified asphalt, the addition of PMMA-GNPs enhanced the rutting resistance, reduced the sensitivity to stress changes and improved the storage stability at high temperature. Chen et al. [12] prepared GNPs/rubber-powder composite-modified asphalt with graphene (GNPs) as a modifier in order to improve the compatibility of rubber-powder-modified asphalt. The results showed that the addition of graphene could improve the compatibility and adhesion of rubber-powder-modified asphalt, as well as its high- and low-temperature properties and viscoelasticity; it was proposed that GNPs/rubber-powder composite-modified asphalt is suitable for heavy-duty traffic. Huang et al. [13] used dipropylene glycol dimethyl ether (DME) and polyvinylpyrrolidone (PVP) as dispersants to prepare a graphene mother liquor (GML). Based on the Lambert-Beer law, the amount of graphene (G) and the ratio of dispersant to graphene (D/G) were obtained. GML and asphalt were compounded to prepare the graphene-modified asphalts DME-GMA and PVP-GMA, respectively. The studies showed that the dispersant can improve the dispersion of graphene in the base asphalt, and that the dispersion effect of PVP is better than that of DME. Li et al. [14] used commercial graphene as a modifier without any treatment and prepared graphene-modified asphalts with different dosages. Their physical, anti-aging and rheological properties were analyzed. It was shown that the addition of graphene improved the high-temperature performance of the material, and that the graphene flakes were able to hinder the diffusion of oxygen into the asphalt, resulting in excellent anti-aging properties. The modification and anti-aging mechanisms were studied by micro-characterization, atomic force microscopy (AFM) and Fourier transform infrared spectroscopy (FTIR). No new absorption peaks appeared in the FTIR spectra of the graphene-modified asphalt, indicating that graphene is physically dispersed in the asphalt and does not react chemically.
In a review of graphene-modified asphalt, Wu et al. [15] summarized the influence of graphene on asphalt binders from three aspects: rutting resistance (high-temperature performance), fatigue cracking resistance (intermediate-temperature performance) and thermal cracking resistance (low-temperature performance), and proposed that future research on graphene-modified asphalt should focus on fatigue cracking resistance and thermal cracking resistance. Han et al. [16] pointed out that the performance tests of graphene-modified asphalt mainly cover viscoelasticity, high- and low-temperature stability, compatibility, electrical conductivity, self-healing and aging resistance, while the performance tests of asphalt mixtures mainly cover volumetric indexes and rutting resistance.
To sum up, some progress has been made in the field of graphene-modified asphalt. These studies focus on the high- and low-temperature performance, aging performance and compatibility of asphalt. Studies have shown that graphene combines with asphalt through physical interaction [14], and a large number of studies have been devoted to the graphene-asphalt compatibility problem, with many properties of asphalt improving once good compatibility is achieved. As for fatigue performance, a few studies have confirmed an improvement in the fatigue life of asphalt [11], consistent with the lubricating properties of graphene [3] and its ability to increase the thermal conductivity of asphalt [4,5]. These conclusions give the present study a theoretical basis. Existing research has concentrated on asphalt and asphalt mixtures, yet in the design and application of actual asphalt pavements, asphalt and mineral powder are combined into asphalt mastics, which act as the filler and binder between the aggregates. Moreover, existing research on the fatigue properties of graphene-modified asphalt mostly addresses the asphalt itself, and few works address the fatigue properties of asphalt mastics. Therefore, this paper takes asphalt mastics, which are closer to practical application, as the research object. The fatigue properties of base asphalt mastics and GNPs-modified asphalt mastics were studied, and the changes in fatigue life and the influence of graphene on the fatigue life of asphalt mastics were compared. This paper enriches the study of graphene-modified asphalt.
Materials
In this paper, No. 70 Grade A asphalt was used; its performance indexes are shown in Table 1, and all tests were conducted in accordance with the JTG E20-2011 standard [17]. In view of the agglomeration of nanomaterials when they are compounded with asphalt, surface-modified lipophilic graphene prepared by mechanical exfoliation was supplied by a graphene technology company; its main performance indicators are shown in Table 2. The mineral powder was finely ground limestone, whose performance indexes are shown in Table 3, and all tests were conducted in accordance with the JTG E42-2005 standard [18].
Sample Preparation
In our team's previous research, the optimal graphene dosage in base asphalt was determined; the recommended dosage is 0.35% of the asphalt mass, which is adopted directly in this study.
Following the melt-mixing method [19,20], GNPs were added at 0.35% of the asphalt mass. The asphalt was first heated to a liquid state, the graphene was added and stirred uniformly, and the GNPs-modified asphalt was then prepared with a high-speed shearing machine. The process parameters were: high-speed shearing at 4500 r/min for 30 min, with the temperature controlled at 135 °C to 145 °C by an electric heating device. Mineral powder was then slowly added at filler-asphalt ratios of 0.8, 1.0 and 1.2 and fully mixed for 10 min, yielding the graphene-modified asphalt mastics. Base asphalt mastics prepared at the same filler-asphalt ratios were used as the control group.
Fatigue Test
The NCHRP 9-10 project team in the United States used dynamic shear rheometer (DSR) time-sweep tests of asphalt under repeated shear loads to measure the fatigue performance of asphalt [1].
An American TA-DHR 2 dynamic shear rheometer (New Castle, DE, USA) was used as the testing instrument. The filler-asphalt ratios were selected in accordance with the Technical Specification for Construction of Highway Asphalt Pavement (JTG F40-2004) [21], which recommends controlling the filler-asphalt ratio of common asphalt mixtures within the range 0.8~1.2. Base asphalt mastics and graphene-modified asphalt mastics with filler-asphalt ratios of 0.8, 1.0 and 1.2 were therefore prepared and used to study the effect of mineral powder content on the fatigue properties of the mastics and the effect of graphene addition on their fatigue properties. The test samples are listed in Table 4.
Fatigue Evaluation Index
In the time-sweep mode of the DSR, the fatigue properties of asphalt were evaluated as follows: (1) N_f50: fatigue failure is defined as the point at which the complex modulus decreases to 50% of its initial value (G*_0); (2) N_δ: fatigue failure is defined at the inflection point of the phase angle (δ) curve [22-25].
Both N_f50 and N_δ can be obtained directly from the experimental results, which makes them relatively easy to apply [8].
The fatigue curve of a typical asphalt mastic is shown in Figure 1. For the index N_f50, the loading time (T) at which the complex modulus falls to 50% of its initial value G*_0 is taken as the fatigue life of the mastic (s). For the index N_δ, the phase angle δ exhibits an inflection point in the latter section of the curve, and the loading time (T) at this inflection point is taken as the fatigue life (s); beyond this point the data become clearly scattered and are no longer usable.
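The two indices can be extracted numerically from time-sweep data. The Python sketch below shows one plausible implementation, assuming arrays of loading time, complex modulus and phase angle; the inflection-point detection (smoothing followed by locating the extremum of the first derivative) is our own simple heuristic, not a procedure specified in the paper.

```python
import numpy as np

def fatigue_life_nf50(t, g_star):
    """N_f50: first loading time at which |G*| falls to 50% of its initial value."""
    below = np.nonzero(g_star <= 0.5 * g_star[0])[0]
    return t[below[0]] if below.size else np.nan

def fatigue_life_ndelta(t, delta, window=25):
    """N_delta: loading time at the inflection point of the phase-angle curve.

    The curve is smoothed with a moving average, and the inflection point is
    located as the extremum of the first derivative (where the second
    derivative changes sign).
    """
    kernel = np.ones(window) / window
    delta_smooth = np.convolve(delta, kernel, mode="same")
    d1 = np.gradient(delta_smooth, t)
    return t[np.argmax(np.abs(d1))]
```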
Test Parameters
The loading control mode is the stress control mode [26], and the detailed test parameters are shown in Table 5.
Fatigue Life of Asphalt Mastics
Under stress-control modes of 0.15 MPa and 0.2 MPa, fatigue tests were carried out on base asphalt mastics and GNPs-modified asphalt mastics with filler-asphalt ratios of 0.8, 1.0 and 1.2, respectively. The fatigue life (T) of each mastic, evaluated by the indexes N_f50 and N_δ, is listed in Table 6. To make the results representative, the average value over the samples was taken as the representative value. Regarding the influence of the filler-asphalt ratio on fatigue life, the results in Table 6 show that the fatigue life of both kinds of asphalt mastics increases with increasing filler-asphalt ratio under both evaluation indexes and both stress modes. This indicates that, in the filler-asphalt ratio range of 0.8 to 1.2, increasing the mineral powder content improves the fatigue life of asphalt mastics.
According to the results in Table 6, the fatigue life curves of the asphalt mastics are drawn as shown in Figure 2. The fitting equations in the figure give correlation coefficients R² above 0.95, indicating a good linear correlation between filler-asphalt ratio and fatigue life.
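A fit of this kind is straightforward to reproduce. The sketch below performs the linear regression and computes R² with NumPy; the fatigue-life values are placeholders, not the data in Table 6.

```python
import numpy as np

ratio = np.array([0.8, 1.0, 1.2])           # filler-asphalt ratios
life = np.array([1200.0, 1650.0, 2100.0])   # fatigue life T (s), placeholder values

slope, intercept = np.polyfit(ratio, life, 1)
pred = slope * ratio + intercept
r2 = 1.0 - np.sum((life - pred) ** 2) / np.sum((life - life.mean()) ** 2)
print(f"T = {slope:.0f} * ratio + {intercept:.0f}, R^2 = {r2:.3f}")
```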
Effect of Graphene Nanoplatelets (GNPs) on Fatigue Properties of Asphalt Mastics
Based on the results in Table 6, the growth value (∆T) and growth rate (ω) of the fatigue life of GNPs-modified asphalt mastics relative to the base asphalt mastics were calculated for each filler-asphalt ratio; the results are shown in Table 7, and the growth value curves are plotted in Figure 3.
Table 7. Fatigue life growth value (∆T) and growth rate (ω) of graphene-modified asphalt mastics.
From Table 7, under a stress of 0.15 MPa, the average growth rate (ω) of the fatigue life of GNPs-modified asphalt mastics relative to the base asphalt mastics was 17.7% at a filler-asphalt ratio of 0.8, 35.4% at 1.0, and 45.2% at 1.2; under a stress of 0.2 MPa, the average growth rate was 17.9% at 0.8, 25.6% at 1.0 and 38.2% at 1.2. The results show that the addition of graphene can significantly improve the fatigue life of base asphalt mastics.
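As a minimal sketch of the ∆T and ω definitions used above, the following computes both quantities from paired fatigue lives; the numbers are placeholders rather than the measurements in Table 7.

```python
# (T_base, T_gnp) fatigue lives in seconds per filler-asphalt ratio (placeholders).
pairs = {0.8: (1200.0, 1415.0), 1.0: (1650.0, 2234.0), 1.2: (2100.0, 3049.0)}

for ratio, (t_base, t_gnp) in pairs.items():
    delta_t = t_gnp - t_base          # growth value of fatigue life
    omega = delta_t / t_base          # growth rate relative to the base mastic
    print(f"ratio {ratio}: dT = {delta_t:.0f} s, omega = {omega:.1%}")
```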
From Figure 3, it can be seen that the growth values (∆T) of the fatigue life of GNPs-modified asphalt mastics increase correspondingly with increasing filler-asphalt ratio, and the correlation coefficients R² are all greater than 0.95, indicating a good linear relationship between the filler-asphalt ratio and the growth values (∆T). It can also be seen that the slope of the growth-value line under 0.15 MPa stress is greater than that under 0.2 MPa stress, indicating that the fatigue life growth under low stress is greater than that under high stress.
Analysis of Fatigue Properties Mechanism of GNPs-Modified Asphalt
Fatigue under load stress is the accumulation of unrecoverable attenuation of the bond strength of asphalt materials under repeated loading, which eventually leads to fatigue cracking [1,2]. From this point of view, the fatigue mechanism of GNPs-modified asphalt mastics was analyzed.
For both base asphalt mastics and GNPs-modified asphalt mastics, an increase in the filler-asphalt ratio increases the fatigue life. This is because the mineral particles combine with, and are encapsulated by, the asphalt, stiffening it and increasing its complex modulus; this provides more energy to resist deformation and delays the development of fatigue cracks when resisting external shear forces [27].
There are three reasons why GNPs-modified asphalt mastics have a longer fatigue life than the base asphalt mastics. (1) Because GNPs are interwoven with the asphalt, there is an interlayer adsorption effect, which strengthens the cohesion of the asphalt, enhances its toughness and thus increases its modulus; when the mastic is subjected to external shear forces, part of the shear stress is buffered by this interlayer adsorption [8,28]. (2) GNPs have excellent lubricating properties; during cyclic shearing, the lubricating effect of GNPs between the particles in the mastic causes the relative positions of particles to change, so that particles originally in close contact become separated by a certain distance [3]. This reduces the shear-friction damage between the asphalt molecules and the mineral powder particles, among the asphalt molecules themselves, and among the mineral powder particles themselves. (3) The superior thermal conductivity of GNPs transfers heat away [4,5], which reduces heat concentration in the asphalt to a certain extent and thus mitigates the accelerated damage caused by frictional heating.
As the filler-asphalt ratio increases, the fatigue life growth rate of GNPs-modified asphalt also increases correspondingly, because with a higher mineral powder ratio the proportion of asphalt decreases and internal friction within the mastic increases, so the lubricating effect of GNPs in reducing shear friction becomes more prominent.
The anti-fatigue mechanism of GNPs-modified asphalt mastics is shown in Figure 4.
Conclusions
In this paper, GNPs-modified asphalt mastics and base asphalt mastics were prepared with filler-asphalt ratios of 0.8, 1.0 and 1.2. Under the stress of 0.15 MPa, the average growth rate of fatigue life is 17.7% when the filler-asphalt ratio is 0.8, 35.4% when it is 1.0 and 45.2% when it is 1.2; under the stress of 0.2 MPa, the average growth rate is 17.9% when the filler-asphalt ratio is 0.8, 25.6% when it is 1.0 and 38.2% when it is 1.2. (3) With the increase of the filler-asphalt ratio, the fatigue life growth values (∆T) of GNPs-modified asphalt mastics also increase correspondingly, and the correlation coefficients R² are greater than 0.95, which shows that the growth value has a good linear relationship with the filler-asphalt ratio. (4) The fatigue modification mechanism of GNPs-modified asphalt mastics is, on the one hand, the interaction between GNPs and asphalt and, on the other hand, the result of the lubricity and thermal conductivity of graphene itself.
In general, the fatigue resistance of GNPs-modified asphalt mastics is better than that of base asphalt mastics. Within the range of commonly used filler-asphalt ratios (0.8-1.2), the higher the filler-asphalt ratio, the more significant the improvement in fatigue life from GNPs. This research has practical application value for the highway construction industry. | 4,743.2 | 2021-08-27T00:00:00.000 | [
"Materials Science",
"Engineering"
] |
Three-Dimensional Microstructural Characterization of Cast Iron Alloys for Numerical Analyses
In this paper, we aim to characterize three different cast iron alloys and their microstructural features, namely lamellar, compacted and nodular graphite iron. The characterization of microscopic features is essential for the development of methods to optimize the behavior of cast iron alloys, e.g. to maximize thermal dissipation and/or maximize ductility while maintaining strength. The variation of these properties is commonly analyzed by metallography on two-dimensional representations of the alloy. However, more precise estimates of the morphologies and material characteristics are obtained by three-dimensional reconstruction of microstructures. X-ray microtomography provides an excellent tool to generate high-resolution three-dimensional microstructure images. The characteristics of the graphite constituent in the microstructure, including size, shape and connectivity, were analyzed for the different cast iron alloys. It was observed that the lamellar and compacted graphite iron alloys have relatively large connected graphite morphologies, as opposed to ductile iron where the graphite is present as nodules. The results of the characterization for the different alloys were ultimately used to generate finite element models.
Introduction
In the beginning, cast iron was only known as white or gray. The invention of the microscope in 1860 opened the way to an improved understanding of the microstructure of cast iron. Ductile iron, also known as nodular iron, was independently invented by Adey, Millis and Morrogh just before 1940. During the 1970s, compacted graphite iron was developed and the major effect of the alloying elements magnesium and cerium was understood [1]. Later studies also clarified the influence of oxygen, where decreasing oxygen content in the melt caused a transformation from flake graphite to compacted and eventually to spheroidal graphite [2].
In most cases the graphite is part of a eutectic reaction following the primary growth of austenite; for ductile iron, for example, the eutectic reaction is highly divorced. Owing to the phase transformations occurring after solidification, studying the complete solidification sequence is difficult. Direct austempering is necessary to reveal the primary austenite, but this affects the solid-state transformation and thereby the precipitation of graphite [3,4].
When casting compacted and ductile iron, deviation from the intended graphite shape is common; ductile iron is defined as a material with more than 80% spherical nodules, while compacted graphite iron is defined as a material with less than 20% spherical nodules. The exact nature of graphite growth is still under debate. Herfurth [5] suggested that the transition from lamellar to spheroidal graphite was due to a change in the ratio of the growth rates of the [1 0 1 0] and [0 0 0 1] faces. Sadocha and Gruzleski [6] suggested circumferential growth of graphite spheroids, while later studies propose growth in the shape of platelets stacked together [7]. The fact that the different morphologies can occur simultaneously in the microstructure makes it difficult to interpret the precipitation sequence in the eutectic through standard cross-sectioning, as the shape and distribution of the different morphologies cannot be visualized in a two-dimensional cross-section.
The development of methods to describe the local microstructural features of different cast iron alloys has been the focus of many studies in the literature [8,9,10,11]. Most of these studies aim to quantify, for example, volume fractions of different constituents, nodularity and mean curvatures. All of these properties help determine which grade a prospective buyer will obtain from the supplier. Velischko et al. [8] developed an analytical method to determine the grade of cast iron, based on training an algorithm to recognize the graphite morphology by image analysis. Using a database of binary images covering known grades of cast iron alloys that have been thoroughly analyzed by experts in the field, the algorithm searches for similarities and thus determines the grade. This approach limits the determination to 2D binary images. Given the large spectrum of graphite morphologies observed in two dimensions, it is obviously interesting to capture and analyze the microstructure in three dimensions. The Focused Ion Beam (FIB) method has therefore been applied in recent years. The method "deep-etches" the cast iron specimen by slicing it to generate a free surface, which is analyzed by scanning electron microscopy (SEM) to generate 2D images; the resulting stack of images is analyzed by software to generate the 3D microstructure [9]. The method is mainly used to analyze small variations in the microstructure, and it is cumbersome to study larger volumes this way: volumes are generally smaller than 1 mm³. The resolution obtained by FIB is nevertheless probably the best available today for generating 3D microstructures of cast iron alloys. Another promising method to study 3D microstructures is X-ray microtomography, or µ-CT. With this method it is possible to establish the morphology and volume fraction of different phases, i.e. ferrite, pearlite and graphite [12].
In the present work, we aim to study the 3D microstructures of Spherical Graphite Iron (SGI), Compacted Graphite Iron (CGI) and Lamellar Graphite Iron (LGI) using µ-CT combined with image analysis, ultimately generating finite element models of the microstructures.
X-Ray Microtomography
X-ray tomograms were collected using a Nikon XT H 225 CT scanner, employing a W reflection target with the source set to 135 kV acceleration voltage and 110 mA filament current. The samples were machined as cylindrical rods of 1 mm diameter, allowing them to be placed at a minimum distance from the source so that the upper nominal resolution limit of about 1 µm voxel size could be obtained.
For the cast iron samples, the main purpose of the tomography analysis was to characterize the different constituents in terms of distributions, sizes and morphologies; accordingly, the sample dimensions and the experimental settings had to be optimized to achieve maximum spatial resolution with a sample volume large enough to be statistically representative of the phases of interest. Figure 1 shows slices reconstructed from the tomograms of the three different cast iron alloys. As can be observed in Figure 1, it is quite difficult to differentiate the phases. It is therefore recommended that image analysis be performed prior to any segmentation in order to distinguish between the different phases in the microstructure. Figure 2 shows an example of how probability density analysis can be used to analyze individual radiographs for an SGI microstructure; for comparison, the original radiograph is also shown. There is obviously a multitude of image analysis techniques available; here we merely point out that it is most often necessary to perform image analysis before any segmentation of phases begins. Note that the ferrite phase surrounding the graphite becomes visible in the processed image, see Figure 2. The bit depth of the radiographs is also a factor to consider: for 8-bit images the intensity is limited to values between 0 and 255, whereas for 16-bit images it lies between 0 and 65535. Thus, shifts in intensity can be captured more easily in 16-bit images.
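As a rough illustration of such pre-analysis, the Python sketch below stretches the intensity distribution of a reconstructed slice and then applies a global threshold to isolate the dark graphite phase. It uses scikit-image; the file name and the particular methods (percentile rescaling, Otsu thresholding) are our own illustrative choices, not necessarily those used in the paper.

```python
import numpy as np
from skimage import exposure, filters, io

# Load one reconstructed slice (hypothetical file name).
slice_img = io.imread("sgi_slice.tif").astype(np.float64)

# Rescale intensities based on the empirical distribution (2nd-98th percentile),
# which can make low-contrast features such as ferrite halos visible.
p2, p98 = np.percentile(slice_img, (2, 98))
rescaled = exposure.rescale_intensity(slice_img, in_range=(p2, p98),
                                      out_range=(0.0, 1.0))

# Otsu's method picks a threshold from the intensity histogram; graphite is
# the low-attenuation (dark) phase in these tomograms.
graphite_mask = rescaled < filters.threshold_otsu(rescaled)
print("graphite area fraction in slice:", graphite_mask.mean())
```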
Microstructures
Once the images have been pre-analyzed using image analysis software, each image is segmented to establish the extent of each microstructural phase. Interpolation of the segmented images was performed to yield three-dimensional representations of the microstructures; see e.g. [13] for a fairly recent study on three-dimensional analysis using µ-CT. Depending on the complexity of the microstructural features, the generated stereolithographic representations, i.e. STL-format files, will typically have facets of varying size. In finite element applications, however, it is favorable for all finite elements to be of the same size, particularly for dynamic simulations. Thus, extensive pre-processing is necessary in order to generate finite element models.
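One way to go from a segmented voxel stack to a triangulated surface of this kind is the marching cubes algorithm. The sketch below uses scikit-image; the file name, the isotropic 1 µm voxel spacing and the assumption of a binary (z, y, x) array are ours, and in practice the resulting mesh would still need remeshing to obtain uniformly sized facets.

```python
import numpy as np
from skimage import measure

# Binary segmented volume of one phase, shape (z, y, x) (hypothetical file).
stack = np.load("graphite_stack.npy")

# Marching cubes extracts a triangulated isosurface at level 0.5; `spacing`
# converts voxel indices into physical units (here 1 µm isotropic voxels).
verts, faces, normals, values = measure.marching_cubes(
    stack.astype(np.float32), level=0.5, spacing=(1.0, 1.0, 1.0))

# Total surface area of the triangulation, useful for the shape statistics
# discussed later (e.g. Table 1).
area = measure.mesh_surface_area(verts, faces)
print(f"{len(faces)} facets, total surface area = {area:.1f} µm^2")
```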
Spherical graphite iron
Known for its excellent castability, high strength and ductility, SGI is used in many industrial applications. The microstructure is characterized by spherical nodules of graphite, where a particular alloy will, for example, have a specific so-called nodularity that can roughly range between 65-85% depending on the shape factor used. Figure 1 (left) shows a typical pearlitic-ferritic SGI radiograph. The nodules will obviously appear to be of different sizes due to the particular cut, so it is tricky to estimate the size distribution of nodules from 2D radiographs. However, analysis of a µ-CT scan reveals the actual size of individual nodules, making it straightforward to establish size distributions. Another characteristic feature of interest is the nearest-neighbor distance between nodules, which has been shown in the literature to be directly linked to the strength of the particular SGI alloy. Again, this is a feature that can be determined in 3D analyses rather than approximated in 2D. Some of these features will be presented in the results section of the current manuscript.
Compacted graphite iron
Owing to the development of foundry and manufacturing technology in the late 1990s, CGI has been used in series production of primarily diesel engine components, such as cylinder heads, for about 20 years. The primary benefits of CGI compared to grey iron, i.e. LGI, or aluminum are its higher strength and fatigue resistance. These properties are important due to the increasing demand for lighter vehicles that reduce fuel consumption while maintaining or even increasing engine power [14].
The microstructure is characterized by a certain amount of nodularity, about 10-20%, and by vermicular-shaped graphite inclusions (see Figure 1, middle), while the matrix is preferably pearlitic.
Lamellar graphite iron
One disadvantage of LGI, covered briefly above, is its lower strength compared to SGI and CGI. However, due to its inexpensive production, excellent heat-conducting properties and vibration-damping ability, LGI is used frequently in industry. Moreover, LGI is used for sliding-contact components such as piston rings [15], as the graphite flake morphology is observed to be self-lubricating under certain sliding conditions.
The nodularity of LGI is 0% and the graphite inclusions take the form of large graphite flakes or lamellas, which is the reason for the naming conventions in the literature, i.e. flake graphite iron (FGI), lamellar graphite iron (LGI) or grey iron.
Results and Discussion
Structural properties of cast irons are determined by their microstructure, so it is essential to characterize microstructural features correctly. For example, characteristics such as nodule size, nodularity and volume all correlate with ultimate tensile strength. For SGI, also referred to as ductile iron, the microstructure is characterized by spherical inclusions in a matrix constituted by pearlite and ferrite (fully ferritic ductile iron also exists). When the microstructure is loaded, the graphite inclusions act similarly to voids, and stress concentrations build up at the graphite/matrix interfaces. Consequently, if graphite inclusions are modelled as voids, the external work will dissipate in terms of plastic dissipation and fracture energy of the matrix phase and not the graphite. The apparent difference in strength between SGI, CGI and LGI is due to graphite morphology: the relatively small round graphite inclusions in SGI, the vermicular-shaped inclusions with some fraction of spherical nodules in CGI, and the flake graphite in LGI behave differently under loading. Relatively large graphite inclusions will deform under loading, and the majority of the external work will then dissipate in terms of fracture energy of the graphite as well, leading to lower strength and ductility. The focus of the present study is to characterize graphite morphologies for SGI, CGI and LGI and relate them to results obtained from finite element simulations. Figure 3 shows a frequency plot of the nodule size for the SGI microstructure, originating from the X-ray tomography. For clarity, the nodules were idealized as perfect spheres. Nodules with a surface area of about 0.0025 mm² are the most common, and a number of nodules have larger areas.
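A size distribution of this kind can be derived from the labeled tomogram. The sketch below labels connected inclusions in a binary volume with SciPy and idealizes each as a volume-equivalent sphere; the file name and the assumed 1 µm voxel size are illustrative only.

```python
import numpy as np
from scipy import ndimage

# Binary graphite mask from the segmentation step (hypothetical file),
# assumed to have isotropic 1 µm voxels.
mask = np.load("graphite_stack.npy")
labels, n = ndimage.label(mask)

# Voxel count (= volume in µm^3) of each labeled inclusion.
volumes = ndimage.sum(mask, labels, index=np.arange(1, n + 1))

# Idealize each inclusion as a sphere of equal volume, then report its
# surface area in mm^2 for comparison with the frequency plot.
eq_diam = (6.0 * volumes / np.pi) ** (1.0 / 3.0)       # µm
eq_area_mm2 = np.pi * eq_diam ** 2 * 1e-6              # mm^2
hist, edges = np.histogram(eq_area_mm2, bins=30)
print(f"{n} inclusions; modal surface-area bin starts at {edges[np.argmax(hist)]:.4f} mm^2")
```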
In the present study, the representative volume elements (RVE) are about 0.7 x 0.7 x 0.7 mm³ and, as seen in Figure 3, there is a slight increase in the number of nodules with a surface area of about 0.015 mm², which corresponds to a nodule diameter of about 35 µm. Thus, defining 35 µm as a large spherical inclusion (e.g. nodule 51), the dendrogram in Figure 4 (left) can be constructed; smaller nodules have been left out, see Figure 4 (right). It shows that there are three cluster formations in the current representative volume for the SGI, with a couple of outliers as well, namely nodules 3, 4 and 135, and the distance between nodules within the clusters is approximately 100 µm. The localization of strain for the SGI graphite clusters is similar to the effect larger graphite inclusions have on the analyzed CGI and LGI microstructures, namely that the graphite has a ductile effect and essentially allows the microstructure to deform. In fact, knowledge of the number of inclusions, the total surface area and the volume fraction yields enough information to draw conclusions about the shape of the graphite inclusions. Table 1 shows this data for the analyzed microstructures. The larger volume fraction of graphite for the SGI and the smaller surface area suggest that the individual shapes must be voluminous, whereas for the CGI and LGI microstructures the area is larger but the volume fraction is lower, suggesting that the graphite inclusions must be less voluminous, i.e. vermicular or flakes. To illustrate this conclusion, Figure 5 shows a histogram of the number of CGI graphite inclusions as a function of surface area. As for the SGI, the graphite inclusions have here been idealized as spheres by calculating the radius from the individual surface areas. It is apparent that the graphite inclusions need to be less voluminous in order to uphold the volume-fraction criterion. The distance between the inclusions is also roughly the same for all inclusions, as shown in the dendrogram in Figure 6. It should be noted that, out of the approximately 230 graphite inclusions in the CGI microstructure, only those with a surface area exceeding 0.015 mm² are shown in Figure 6. The nodularity of CGI microstructures varies between 0-20%, and the analyzed CGI RVE contains spherical nodules, as can be seen in Figure 5 (right). Similar to the CGI microstructure, the LGI microstructure has large interconnected flake-like graphite inclusions, but 0% nodularity. Nevertheless, for the purpose of comparison, the inclusions are idealized as spheres in the same manner as before. For the present LGI microstructure, there are approximately 80 inclusions in total. Fewer than 20 inclusions are larger than 0.015 mm², and the biggest interconnected inclusion has a surface area of 2.827 mm², see Figure 7.
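The dendrograms and nearest-neighbor distances mentioned above can be computed from the inclusion centroids. Below is a Python sketch using SciPy's hierarchical clustering; the random centroids merely stand in for coordinates extracted from the labeled volume, and the 100 µm cut-off echoes the cluster spacing observed for the SGI RVE.

```python
import numpy as np
from scipy.cluster.hierarchy import fcluster, linkage
from scipy.spatial import cKDTree

# Centroids (x, y, z) in µm of the large inclusions; placeholders here, in
# practice obtained from e.g. scipy.ndimage.center_of_mass on the labels.
rng = np.random.default_rng(0)
centroids = rng.uniform(0.0, 700.0, size=(40, 3))

# Single-linkage clustering corresponds to merging by nearest neighbors;
# cutting the tree at 100 µm yields the cluster formations of the dendrogram.
Z = linkage(centroids, method="single")
clusters = fcluster(Z, t=100.0, criterion="distance")
print("cluster sizes:", np.bincount(clusters)[1:])

# Nearest-neighbor distances (k=2 because the closest point is the point
# itself), reported in the literature to correlate with SGI strength.
dist, _ = cKDTree(centroids).query(centroids, k=2)
print("mean nearest-neighbor distance:", dist[:, 1].mean())
```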
Here, the largest inclusion is an outlier with a fairly large distance to the surrounding inclusions. In fact, since the RVE is cut from the specimen scan, the surface area of the largest inclusion is actually larger. Relating the cluster formations observed in the SGI microstructure to the evolution of plastic strain for a specific load case, it is observed that the clusters localize the plastic strain, see Figure 8. This effect is not as prominent for the other two microstructures, due to the aforementioned homogenized ductility given by the larger graphite inclusions, as can be seen in the two lower contour plots in Figure 8.
Conclusions
Three different cast iron microstructures have been characterized based on the graphite morphology alone. It has been shown that the total surface area and volume fraction of the graphite unambiguously correlate with the suspected shape of individual graphite inclusions. Furthermore, the use of dendrograms provides an excellent method for the analysis of cluster formation, in particular for SGI microstructures, due to the spheroidal nature of the graphite inclusions. As a final conclusion, the use of µ-CT to analyze cast iron microstructures has proven to be an outstanding approach for generating finite element models that can be used in several studies to come. | 3,816.4 | 2018-06-01T00:00:00.000 | [
"Materials Science",
"Engineering",
"Physics"
] |
Accounting for species interactions is necessary for predicting how arctic arthropod communities respond to climate change
© 2021 The Authors. Ecography published by John Wiley & Sons Ltd on behalf of Nordic Society Oikos. This is an open access article under the terms of the Creative Commons Attribution License, which permits use, distribution and reproduction in any medium, provided the original work is properly cited. doi: 10.1111/ecog.05547
Introduction
A central goal of current biodiversity research is to better understand and predict biodiversity responses to climate change. Most research on how climate change will affect future ecological communities has focused on the link between the abiotic changes caused by climate warming and subsequent changes in species distributions and abundances (Pereira et al. 2010, Pacifici et al. 2015). Species extinctions and changes in species' distribution and abundance are currently modifying the composition and functioning of ecological communities (Parmesan 2006, Chen et al. 2011, Blowes et al. 2019). However, the ongoing changes are modulated not only by the direct impacts of changing climatic conditions on individual species but also indirectly, through cascading effects of species interactions (Petchey et al. 1999, Memmott et al. 2007, Tylianakis et al. 2008, Harley 2011). Cascade effects follow when the abundance of a species in a local community changes in response to e.g. climate warming, influencing the occurrences or abundances of other species it interacts with. Therefore, the joint assessment of climate and interspecific interactions is of fundamental importance for improving our current understanding and our predictions of how climate change affects ecological communities (Gilman et al. 2010, Van der Putten et al. 2010, Blois et al. 2013, Scheffers et al. 2016).
In the Arctic, the rate of temperature increase is nearly double the global average (Kattsov et al. 2005, IPCC 2014, Post et al. 2019). This is dramatically changing abiotic conditions, with important impacts on phenology, especially at the onset of the growing season (Høye et al. 2007, Wheeler et al. 2015, Kankaanpää et al. 2018). As a consequence, plants and animals adapted to longer growing seasons, as well as species originating from lower latitudes or from warmer locations nearby, are becoming more abundant (Sturm et al. 2001, Post et al. 2009, Elmendorf et al. 2012, Elmhagen et al. 2015). Conversely, two types of species are predicted to decline in particular: species adapted to the short but intense growing season of the Arctic, and species sliding out of synchrony with their key resources (Forchhammer 2008, Hegland et al. 2009). Related to the latter, the effects of climate warming on arctic communities are intensified by their impacts on species interactions. Changes in phenology can decouple interacting species, causing 'phenological mismatch' (Renner and Zohner 2018), and such mismatches can have serious consequences, including population declines of herbivores (Forchhammer 2008, Ross et al. 2017), specialist predators (Gilg et al. 2009, Schmidt et al. 2012a) and pollinators (Høye et al. 2013, Schmidt et al. 2017), or - mutatis mutandis - the escape of herbivores from enemies and increased plant damage (Kankaanpää et al. 2020, 2021). Arctic terrestrial interaction webs are dominated by arthropods (Wirta et al. 2015, 2016). As recently uncovered by molecular tools, the diversity of arctic arthropods exceeds what was previously assumed (Wirta et al. 2015, 2016). The short lifespans, strong functional responses to temperature and narrow phenological niches of arthropods make them particularly responsive to warming (Deutsch et al. 2008). Indeed, the few studies of temporal trends in population abundance already suggest that arctic arthropod communities are experiencing severe changes (Gillespie et al. 2019, Kankaanpää et al. 2020). Among soil- and foliage-dwelling arctic arthropods, herbivores (Hemiptera and Lepidoptera) and parasitoids (Hymenoptera) have increased locally in abundance, whereas detritivores (Collembola and Acari) have decreased (Koltz et al. 2018a). While spider communities (Araneae) seem not to have experienced drastic changes overall, individual species have declined in abundance (Bowden et al. 2018). Among flower-visiting arctic arthropods, muscid flies (Diptera) have drastically decreased in abundance (Loboda et al. 2018). The abundances of aquatic arthropods such as chironomids (Diptera) are also changing, with abundances predicted to increase with rising temperatures (Engels et al. 2020, but see Høye et al. 2013).
Some of the most frequent trophic interactions among arctic arthropods involve spiders feeding on herbivorous arthropods (Wirta et al. 2015, Eitzinger et al. 2019), and parasitoid wasps and flies feeding on lepidopteran and dipteran larvae (Várkonyi and Roslin 2013, Kankaanpää et al. 2020). Given these strong trophic interactions, climate-induced change in arthropod diversity and abundance can be expected to cascade through these high-latitude communities (Koltz et al. 2018b, but see Visakorpi et al. 2015). Furthermore, changes in arthropod communities can have drastic consequences for ecosystem functioning (Schmidt et al. 2017). For instance, a large proportion of tundra plants is insect-pollinated (Kevan 1972), with muscid flies identified as the main pollinators of some dominant plant species (Gillespie et al. 2016, Tiusanen et al. 2016). Also, while plant consumption by arctic arthropods is rather low, it is still of the same magnitude as herbivory by large mammals (Mosbacher et al. 2016). Thus, predicted outbreaks of herbivorous arthropods in the Arctic can have important consequences for plant consumption (Jepsen et al. 2008) and ecosystem primary productivity (Lund et al. 2017). Finally, arthropods are the main food resource for many migratory birds during their breeding season, and changes in arthropod availability may thus affect arctic bird reproduction and growth (Meltofte et al. 2007, van Gils et al. 2016). Despite this evidence for pervasive trophic interactions among arctic arthropods, coupled with the high sensitivity of arthropod population dynamics to climate change, the role of trophic interactions in their responses to global climate change is still largely unknown. Here we aim to fill this knowledge gap by asking whether we can use information on species interactions to improve our predictions of arctic arthropod population dynamics. For this purpose, we use a 14-year-long time series of arthropod communities from Zackenberg, Northeast Greenland, collected by BioBasis (Schmidt et al. 2012b) as part of the Greenland Ecosystem Monitoring (GEM) program (<https://data.g-em.dk/>, accessed 9 Aug 2020). The time series consists of physical samples of all trophic levels within the local arthropod community. While most specimens had previously been classified to higher taxonomic ranks only (typically families, with species-level identifications available for only a minority of the specimens; see Schmidt et al. 2016 for exact categories), in this paper we achieve species-level resolution for all samples by applying a mitogenomic-mapping pipeline that we recently developed (Ji et al. 2020). To elucidate the impact of species interactions on the population dynamics of arthropods, we compared the predictive performances of four alternative joint species distribution models, which differed in whether and how interactions were taken into account (Fig. 1).
Study system and sampling methods
The data were collected as part of the GEM program in Zackenberg, a High Arctic site located in Northeast Greenland (74°28′ N, 20°34′ W). Zackenberg is characterized by a continental climate with mean monthly temperatures ranging from −20 to +7°C and an annual precipitation of 260 mm. The vegetation consists of tundra species, of which the arctic willow Salix arctica, the arctic bell-heather Cassiope tetragona and the mountain avens Dryas spp. are some of the most abundant. The growing season has lengthened by ca 50% during the study period, from a snow-free period lasting from end-June to end-August in the late 1990s to a period lasting from mid-June to the beginning of September in the 2010s (Kankaanpää et al. 2018).
Arthropods were sampled weekly during the growing season from 1997 to 2013, using yellow pitfall traps located in a mesic heath habitat. While the original number of traps monitored by the BioBasis program is larger, we focused on traps from one of the six still-operational arthropod monitoring sites within the larger landscape (Ji et al. 2020). Specifically, the material derives from three of four traps (A-C) collected at trapping station ART3 (for maps and details, see Schmidt et al. 2012b), the traps being located 5 m from each other. Unfortunately, the 2010 samples were lost in transit from Greenland, so data from this year are missing.
Sample processing, sequencing and mitogenomic mapping
In a previous methods publication (Ji et al. 2020), we described the SPIKEPIPE mitogenome-mapping pipeline, which converts arthropod bulk samples into data on species presence-absence and DNA-abundance. Below, we briefly summarize the steps performed by Ji et al. (2020), followed by a description of how we supplemented the data matrices generated by Ji et al. (2020) with information from other sources. More detailed descriptions of sample processing, sequencing and bioinformatics are in the Supporting information and Ji et al. (2020).
The arthropod samples were preserved in ethanol at room temperature in the Museum of Natural History, Aarhus, Denmark. Over the years, the samples have been sorted into subsamples of higher taxonomic levels (typically families). In 2016 and 2017, we non-destructively extracted genomic DNA from these subsamples following a modified procedure from Gilbert et al. (2007), pooled them into the original trap-week samples, added a fixed-aliquot DNA-spike as a standard, shotgun-sequenced the DNA from each sample, and mapped the output reads to a mitogenome reference database. Each trap-week sample was individually library-prepped and sequenced at the Earlham Institute in Norwich, UK. The mitogenome reference library was constructed from the voucher collection established by Wirta et al. (2016). We used the DNA-spike in each sample, technical replicates across sequencing runs, and a mock-sample-estimated threshold for minimum-acceptable mapping-coverage to achieve accurate data on species presence-absence as well as DNA-abundance.
Figure 1. Conceptual illustration of the four time-series models compared in this study. Left: species data are included in the statistical models as presence-absence of species in a particular trap-week (w) of a particular year (Y) (when used as a response variable), and as abundance measured by the proportion of trap-weeks during which the species was recorded in a previous year or absolute abundance measured by DNA (when used as a predictor); data on climatic variables (mean air temperature and snow depth) are included as predictors. Right: the models of increasing complexity assume that the temporal dynamics of species communities are determined by an increasing number of factors: 1) abiotic environmental variables (sampling week temperature and previous summer temperature or previous winter snow depth) only (abiotic model), 2) abiotic environmental variables + intra-specific interactions (density-dependent model), and abiotic environmental variables + intra-specific interactions + inter-specific interactions, the latter structured either 3) at the species level (species-interactions model) or 4) at the trophic-group level (trophic-interactions model). For simplicity, in this figure the models are illustrated from the point of view of predicting the dynamics of one focal species, whereas in the joint species distribution models, all species in the community data are simultaneously focal species.
Community data
The species data matrix derived from mitogenome mapping was complemented with data from two other sources. First, samples from the earliest and latest parts of the summer often comprised only a few individuals. For these, we DNA-barcoded each individual with PCR primers LCO1490 and HCO2198 (Folmer et al. 1994), identified them to species using the BOLD database (Ratnasingham and Hebert 2007), and added their occurrences to our data matrix. These DNA-barcode data include only the species also detected by mitogenome mapping. Second, we added data on Diptera species from subsamples that had been identified in an independent morphological study (Loboda et al. 2018) but had not yet been returned to the collection, and which thus could not be sequenced. This added three new Diptera species to the data, as well as new occurrences of species already detected by the mitogenomic sequencing. Altogether, the compiled community data consisted of 81 arthropod species identified in 542 trap-week samples. Given that this dataset represents arthropods co-occurring in space (trap) and time (week), we consider the species found in the trap-week samples to belong to the same community (Fauth et al. 1996).
To include functional information in the form of the trophic levels represented in the community data, each species was classified by its non-adult feeding type: herbivore (9 species), omnivore (1 species), parasitoid (18 species), predator (25 species) or species that were either saprophages or detritivores (28 species). These classifications are provided in the Supporting information.
Climate data
From GEM's ClimateBasis program, we downloaded average hourly soil and air temperatures (the latter measured 2 m aboveground), as well as three-hourly data on snow depth, spanning the study period. As the soil and air temperatures were highly correlated (Pearson correlation r = 0.90), only air-temperature data were used. From these data we calculated three climatic predictors: 1) the mean temperature during the week in which the arthropod sampling event was conducted (henceforth called sampling week temperature); 2) the mean temperature during the previous summer (averaged over weeks 23-32; henceforth called previous summer temperature); and 3) the mean snow depth during the previous winter (averaged over weeks 33-52 and 1-22; henceforth called previous winter snow depth). The sampling week temperature predictor was aimed at capturing insect activity and thus detectability (Høye and Forchhammer 2008), whereas the previous-year summer temperature and previous-year winter snow depth predictors were aimed at capturing the influence of climatic factors on population growth rate. Previous summer temperature has been found to affect arthropod population growth rates (Koltz et al. 2018a), while snow depth is associated with the timing of snow melt, which affects plant phenology and consequently arthropod population dynamics (Høye et al. 2013).
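For concreteness, the three predictors can be assembled as follows. This is a minimal pandas sketch under assumed column names (year, week, air_temp, snow_depth) and a hypothetical input file, not the authors' actual processing code.

```python
import pandas as pd

met = pd.read_csv("zackenberg_climate.csv")  # hypothetical file and columns

# 1) Sampling week temperature: weekly mean air temperature, joined to each
#    trap-week sample on (year, week).
week_t = met.groupby(["year", "week"])["air_temp"].mean()

# 2) Previous summer temperature: mean over weeks 23-32, shifted one year
#    forward so it aligns with the year whose dynamics are being predicted.
summer = met[met["week"].between(23, 32)].groupby("year")["air_temp"].mean()
prev_summer_t = summer.rename(lambda y: y + 1)

# 3) Previous winter snow depth: weeks 33-52 of year y-1 together with weeks
#    1-22 of year y, averaged and indexed by y.
winter = met[(met["week"] >= 33) | (met["week"] <= 22)].copy()
winter["winter_year"] = winter["year"].where(winter["week"] <= 22,
                                             winter["year"] + 1)
prev_winter_snow = winter.groupby("winter_year")["snow_depth"].mean()
```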
Analyses of species richness
As a measure of species richness, we used the yearly averages of the number of species identified per trap-week sample. We used linear regression to quantify the trend in species richness as a function of year. To examine whether the trend differed among trophic groups, we repeated the analysis separately for each trophic group. We further examined whether there has been a systematic trend in temperature by regressing temperature against the linear effect of year. To account for possible temporal autocorrelation in the model residuals, we supplemented the baseline linear model analyses with an ARIMA (autoregressive integrated moving average) model. We implemented the linear model with the 'lm' function in R (<www.r-project.org>) and assessed the significance of the linear trend by the p-value. We implemented the ARIMA model with the 'arima' function in R and assessed the significance of the linear trend by comparing AIC values between model variants that included and excluded the linear trend of year.
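The paper implements this in R; the sketch below is a Python analogue using SciPy and statsmodels, included only to make the two-step logic concrete. The richness series is simulated placeholder data, not the Zackenberg observations.

```python
import numpy as np
from scipy import stats
from statsmodels.tsa.arima.model import ARIMA

# Placeholder yearly richness series (mean species per trap-week sample).
years = np.arange(1997, 2011)
rng = np.random.default_rng(1)
richness = np.linspace(8.9, 4.4, years.size) + rng.normal(0, 0.8, years.size)

# Step 1: ordinary linear regression for the trend and its p-value.
res = stats.linregress(years, richness)
print(f"slope = {res.slope:.3f} species/yr, p = {res.pvalue:.4f}")

# Step 2: AR(1) models with and without a deterministic trend; a positive
# deltaAIC supports keeping the trend despite temporal autocorrelation.
aic_trend = ARIMA(richness, order=(1, 0, 0), trend="ct").fit().aic
aic_no_trend = ARIMA(richness, order=(1, 0, 0), trend="c").fit().aic
print(f"deltaAIC (no trend minus trend) = {aic_no_trend - aic_trend:.1f}")
```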
Analyses of population dynamics
To identify the drivers of population dynamics, we applied a joint species distribution model (Warton et al. 2015) via the hierarchical modelling of species communities (HMSC) framework (Ovaskainen et al. 2017b, Ovaskainen and Abrego 2020), using a time-series approach similar to that of Ovaskainen et al. (2017a). While the data were generated at a weekly resolution, our main interest was in the yearly transitions, as most arctic arthropods have an annual or multiannual life cycle, and thus the population dynamics of most of the arthropod species in the dataset take place on an annual basis. The predictor variables of previous-year species abundances and previous-year climatic conditions were therefore defined as year-specific averages (Fig. 1). The response variable was, however, measured at the resolution of species-trap-week, as this allowed us to account for both phenological variation and the influence of weather conditions on insect activity and thus detectability (Høye and Forchhammer 2008).
Concerning species abundances in the previous year, we considered two different predictors: 'prevalence-based abundance' and 'DNA-based abundance' (Fig. 1). Prevalence-based abundance was defined as the fraction of trap-week sampling units occupied by a given species over the previous year. We considered this measure to be an ecologically relevant proxy of species abundance, because if a species is consistently present in many samples from a given year, then this species is likely to be more available for species interactions. While DNA abundance is in theory more informative than mere prevalence, it includes species-specific biases (e.g. mitochondrial copy number, individual biomass), for which reason we scaled the estimates of Ji et al. (2020) to zero mean and unit variance within each species. DNA abundance can also be expected to be more prone to random variation due to e.g. outliers generated by an exceptionally large DNA yield. Further, for part of the data, DNA abundance could not be recovered (i.e. for Sanger-sequenced and non-sequenced individuals), so these samples represent missing data for DNA abundance.
We fitted a probit model where the response variable was the vector of presences and absences of all species in a particular trap in a particular week in a particular year. In all models, we included as predictors the linear and second-order effects of the week (to model species phenology), and the sampling week average temperature to capture variation in thermal conditions that influence insect activity and thus detectability (Høye and Forchhammer 2008). Furthermore, we included either previous-year summer temperature or previous-year winter snow depth as the climatic predictor influencing insect growth rate, either by directly affecting insect thermal tolerances and/or vegetation phenology (Høye et al. 2013, Nabe-Nielsen et al. 2017, Kankaanpää et al. 2018, Koltz et al. 2018a). The reason for fitting alternative models rather than including both predictors in the same model was that the time series contained in total only 14 yearly transitions, and it was thus not possible to include a large number of year-specific predictors. The abiotic model included no additional predictors. In the density-dependent model, we added as a predictor the abundance of only the focal species in the previous year (Fig. 1). In the trophic-interactions model, we likewise included the abundance of the focal species in the previous year, as well as four additional predictors measuring the abundances of herbivores, parasitoids, predators, and saprophages and detritivores in the previous year, each averaged over the species within the trophic group. In the species-interactions model, we included as predictors the species-specific abundances of occupied trap-weeks in the previous year for all species, thus modelling both within- and among-species density dependence at the species level.
To account for the spatial and temporal non-independence of the sampling units, we included trap (three levels of trap A, B and C) and year (14 levels) as random effects in all models. To yield community-level inference, we further added a hierarchical model structure indicating the trophic group of the species. In the HMSC framework, traits are used as predictors for how the species respond to fixed effects (Ovaskainen et al. 2017b).
In the HMSC analyses, we included only the 52 species that were present in at least 10 trap-week samples (Supporting information). Furthermore, while the entire data consisted of 542 trap-week samples, in the HMSC analyses we included only 467 samples, as we needed to exclude data from the year 2011, for which we could not compute the biotic predictors since the data from 2010 were missing. We fitted the models using the R-package Hmsc 3.0 (Tikhonov et al. 2020).
In the species-interactions model we specifically avoided over-parameterization by applying the so-called 'sparse interaction model' of Ovaskainen et al. (2017a). In this approach, the level of variable selection is tuned by choosing the prior probability by which each variable (in this case the abundance of a specific species in the previous year) is included in the model, i.e. by which the regression coefficient is different from zero. To test the sensitivity of the results to this choice, we varied the prior probability from 0.01 to 0.1, 0.5 and 1.0. We then chose the model with the highest predictive performance, i.e. the one that was best able to capture the signal while avoiding overfitting to the noise. We evaluated the level of over-parameterization by comparing the explanatory power (predictions based on models fitted to all data) with the predictive power (predictions based on a cross-validation approach). For technical details about model fitting and model comparison, see the Supporting information.
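To make the year-based cross-validation concrete, the following Python sketch implements a leave-one-year-out loop and the yearly Spearman correlation used to score predictions. A per-species logistic regression stands in for the full HMSC model (which jointly models all species with trap and year random effects), so this is a simplified illustration, not the authors' pipeline.

```python
import numpy as np
from scipy.stats import spearmanr
from sklearn.linear_model import LogisticRegression

def leave_one_year_out_r(X, y, year):
    """Year-based cross-validation for one species.

    X: (n_samples, n_predictors) array of trap-week predictors.
    y: binary presence-absence of the species per trap-week sample.
    year: the year of each sample.
    Returns the Spearman correlation between observed and predicted
    occurrences, both averaged to yearly prevalences.
    """
    pred = np.empty(len(y), dtype=float)
    for yr in np.unique(year):
        train, test = year != yr, year == yr
        model = LogisticRegression(max_iter=1000).fit(X[train], y[train])
        pred[test] = model.predict_proba(X[test])[:, 1]

    yrs = np.unique(year)
    obs = np.array([y[year == yr].mean() for yr in yrs])
    est = np.array([pred[year == yr].mean() for yr in yrs])
    rho, _ = spearmanr(obs, est)
    return rho
```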
Results
Over the study period, the average summer temperature at Zackenberg increased by 2.0°C (from 1.28°C to 3.28°C, p = 0.012 in the linear model and ΔAIC = 5.4 in the ARIMA model, Fig. 2a), and arthropod species richness drastically decreased by 50% (from a yearly mean of 8.9 species to 4.4 species per trap-week, p = 0.003 for the linear trend, Fig. 2b). The overall species richness increased to an unusually high level in 1999, then crashed during 2000-2001, after which it first partially recovered and then continued decreasing until the end of the study period (Fig. 2b). Changes in species richness were group-specific, indicating that the food-web structure changed over time (Fig. 2c-f). Predators and saprophages/detritivores decreased the most steeply over the study period, whereas for herbivores and parasitoids the dynamics show a fluctuating pattern with no overall trend. Since most parasitoids are in the family Ichneumonidae, most predators in Muscidae and most detritivores in Chironomidae (Supporting information), trends in these families match trends at the level of their corresponding trophic group (Supporting information).
Weekly variation in presence-absence was satisfactorily predicted by all models, as shown by the mean AUC value over the species ranging from 0.82 to 0.88 for all model variants. However, our focus was at a yearly resolution, i.e. at assessing how well the models were able to predict in which years species abundances were high or low. At this level, the explanatory powers of the models, measured as the correlation between predicted and observed yearly abundances, ranged from 0.61 to 0.99 (Table 1). As may be expected, the explanatory powers generally increased with increasing model complexity, since more complex models have more parameters that can be estimated. To account for this potential effect of overfitting, we evaluated the predictive powers of the models through a cross-validation procedure in which we masked the data from each year for which a prediction was made. Compared to the explanatory powers, the predictive powers were much lower (Table 1), highlighting the difficulty of ecological prediction. The predictive powers of the abiotic models were even negative, reflecting the fact that when a focal year is excluded from the model fitting, the predictions of population abundances may be opposite to reality (i.e. predicted to be high in a year when they were low in reality, and vice versa).
When we added interspecific interactions, either at the level of pairs of trophic groups or pairs of species, the predictive power turned from negative to positive (Table 1). The highest predictive powers were achieved by the pairwise species-interaction model, which estimates the full pairwise species-to-species interaction matrix of the yearly transitions. In terms of how species abundance in the previous year was measured, species prevalence turned out to be a better predictor under cross-validation than DNA abundance for all model variants, for which reason we present below results only for the model using prevalence-based abundance.
Table 1. Explanatory and predictive powers of the four joint species distribution models varying in their abiotic predictors (previous summer temperature and previous winter snow depth). We measured the yearly explanatory power (r) by calculating the Spearman rank correlation between the predicted and observed arthropod occurrences, both averaged into yearly prevalences. The yearly predictive power (CV-r) was measured by year-based cross-validation, i.e. by assessing how well the models fitted to data excluding each focal year were able to predict the arthropod communities. Note that all the models also include the explanatory variables of sampling week temperature and effect of the week. We measured the weekly explanatory power (AUC) by calculating the area under the curve between the predicted and observed arthropod occurrences. The weekly predictive power (CV-AUC) was measured by 10-fold cross-validation. The results shown here are based on prevalence-based abundance as the predictor; results for the models using DNA-based abundance are given in the Supporting information.
The results for DNA-abundance are shown in the Supporting information. For the abiotic model, the variant including previous summer temperature as the climatic predictor performed better (less negative predictive power) than the variant including previous winter snow depth (Table 1). Interestingly, the model which achieved the highest predictive power was the species-interaction model that included previous winter snow depth as the climatic predictor. For the model variant including previous summer temperature as the predictor, accounting for species interactions increased the predictive power by 0.18 (from −0.08 to 0.1). For the model variant including previous winter snow depth, accounting for species interactions increased the predictive power much more, by 0.36 (from −0.20 to 0.16). This result suggests that species abundances in the previous years and snow depth in the previous winter bring complementary information.
As expected, the difference between predictive and explanatory power was greatest for the most complex models, as these have more parameters and thus a higher risk of overfitting. However, in the species-interaction models the predictive power did not improve when applying the so-called sparse interaction model (Supporting information). Hence, the results shown here correspond to the model without variable selection.
When assessing how the predictive powers varied among species, we observed that in the model variant including previous summer temperature, the predators achieved generally the highest and the herbivores the lowest predictive power (Supporting information), whereas no systematic differences were observed among the trophic groups in the model variant including previous winter snow depth as the climatic predictor (Supporting information). In both model variants, the predictive power increased with species prevalence in the abiotic model but not in the species interactions model (Supporting information), suggesting that accounting for species interactions increased the predictive power especially for the rare species.
Variance partitioning among the explanatory variables showed that arctic arthropod communities are characterized by marked seasonal variation (week and its square explained more than half of all explained variation in all but species interaction models, Fig. 3). Climatic conditions also affected arthropod community dynamics, but only to a lesser extent (sampling week temperature and previous summer temperature/previous winter snow depth explained altogether 12-17% of the variation in all but species interaction models). Concerning the biotic predictors, density-dependence within species explained only a minor part (3%), trophic interactions a substantial part (20-22%) and species interactions a major part (51%) of the variation.
For most species, the responses of arthropod species to the previous week's temperature were positive with at least 90% posterior probability, reflecting the positive effect of within-season temperature on arthropod activity and thus detectability (Fig. 3). In all four models, the response of arthropod species to the second-order effect of week was negative, reflecting the peak in arthropod occurrence at intermediate weeks. Within-species density dependence was predominantly positive in the few cases that showed an effect (Fig. 3). In the trophic-interactions model, parasitoid and predator abundances positively influenced many species, especially saprophages and detritivores (Fig. 3). In contrast, saprophage and detritivore abundance influenced many species negatively, especially herbivores (Fig. 3). Herbivore abundance influenced parasitoids positively and saprophages and detritivores negatively (Fig. 3). In the species-interactions model, no interactions were supported with over 90% posterior probability, demonstrating the difficulty of inferring exactly which species pairs drive the community-level trends.
Discussion
Developing accurate predictive models of biodiversity change is a priority for global-change science. A topical issue in formulating such models is whether and how species interactions should be accounted for, especially when modelling species-rich communities (Gilman et al. 2010, Blois et al. 2013). Our study of arctic arthropods demonstrates that accounting for interspecific interactions either at the species or trophic group level indeed improves the predictive performance of time-series models.
In demonstrating the importance of biotic interactions for understanding community dynamics, we derive support from and build on several studies that have improved predictive performance by including the abundances of one or a few interacting species as predictors in single-species distribution models (Araújo and Luoto 2007, Heikkinen et al. 2007, Wisz et al. 2013, le Roux et al. 2014, Mod et al. 2015). In our study, we have extended this approach to a community-wide perspective, showing how to account for a complex interaction network via joint species distribution models that describe community-level dynamics. Interestingly, accounting for within-species density dependence did not improve predictive power, whereas accounting for interspecific interactions did. Given the imperative need to scale up from species-level to community-level projections of biodiversity change, these results represent a major step forward for predictive ecology.

Arthropods are fundamentally important ecosystem components, both in species numbers and in contributions to ecosystem functioning (Schmidt et al. 2017). We report an alarming rate of arthropod decline in a High Arctic site, particularly among predators and saprophages/detritivores. Our study site is essentially untouched by human activity, except for climate warming, which is progressing twice as quickly in the High Arctic as the global average (Kattsov et al. 2005, IPCC 2014, Post et al. 2019). Declines in arctic arthropod species richness and in the abundances of several taxonomic and functional groups have previously been reported in relation to climate warming (Høye et al. 2013, Koltz et al. 2018a, Loboda et al. 2018, Gillespie et al. 2019). This is the first study presenting comprehensive community-level results with species-resolution data, allowing us to compare the rates of decline among groups. Our observation of declining trends among arthropods echoes those of Koltz et al. (2018a), who partially used samples from the same study area. A key difference is in the information inferred from species- versus group-level data. Where Koltz et al. (2018a) used aggregate counts of higher taxonomic ranks, we used data on each species in the community to build support for emergent change. That is, rather than assuming that a group-level response reflects consistent change among the species in the group, we directly evaluated this assumption by observing species-level changes within and among groups.

Taken together, the results support the interpretation that the food-web structure of arctic arthropod communities has changed markedly over the last decades (Kankaanpää et al. 2020). From our results, it is clear that the trophic-web structure is changing radically in the High Arctic, and that this affects arthropod community composition much more profoundly than if impacts derived solely from the direct effects of climatic changes. Predators and saprophages/detritivores in particular have declined disproportionately compared to the other groups (this study, Koltz et al. 2018a). Our results show that the trajectory of community change will be driven not by each species responding individually to the changing environment, but by complex lagged effects between trophic groups. As such, our findings add evidence to reports on insect decline (Cardoso and Leather 2019, Sánchez-Bayo and Wyckhuys 2019) and add to concerns regarding subsequent changes in ecosystem functioning. Working in Alaska, Koltz et al.
(2018b) found changes in climate to reverse top-down effects of predators on belowground ecosystem function, mediated by changes in mesofauna (springtails [Collembola] and mites [Acari]). Our study shows disproportionate changes among macrofauna with a detritivorous larval stage (Diptera, Insecta), suggesting an Arctic-wide pervasive impact of climate change on detritivores.
The result that different arthropod trophic levels are differently affected by climate warming in the Arctic is consistent with previous reports of shifts between specific trophic levels. With respect to plant-herbivore interactions, experimental (Roy et al. 2004, Liu et al. 2011, de Sassi and Tylianakis 2012, Birkemoe et al. 2016) and observational studies (Barrio et al. 2017, Rheubottom et al. 2019, Kankaanpää et al. 2020) report changes in arthropod-driven herbivory. With respect to predator-detritivore interactions, experimental studies suggest that effects of climate change can cascade through other trophic levels, thereby altering critical ecosystem functions (Koltz et al. 2018c). In our study system, we note that a previous attempt to detect trophic cascades from data on densities of a single predator spider species, Pardosa glacialis, was unsuccessful, a finding attributed to the complexity of the local food web and the many outflows from an increased predation pressure (Visakorpi et al. 2015). Likewise, the current result that accounting for density dependence did not improve model predictions may reflect the fact that capturing density dependence in arthropods is challenging, due to highly variable population dynamics (Hanski et al. 1990) and/or to the fact that trapping success is influenced by both abundance and activity. In this study, we included a higher level of complexity in the analyses by jointly analysing 52 arthropod species, classified by trophic group, enabling us to detect trophic-cascade effects across the arthropod community.
Some of the trophic-group effects are easy to interpret, such as the positive influence of the previous year's herbivore abundance on parasitoids. Other results are more difficult to interpret, such as the positive influence of parasitoids and predators on saprophages/detritivores, and the negative influence of saprophages/detritivores on herbivores. Whether these effects are due to direct or unknown species interactions or to some (delayed) responses of the species to environmental conditions not included in our modelling is an important question for future studies. Important but unaccounted-for abiotic predictors may, for example, include snow and microclimatic conditions (Kankaanpää et al. 2018, 2020a). Thus, we note that variation assigned to trophic groups may in some cases represent covariation with unmeasured environmental covariates rather than the direct effects of biotic interactions.
However, we stress that identifying the exact causality is not required for our core inference: that accounting for interspecific associations is necessary for successfully predicting how arthropods respond to climate change. Regardless of whether the parameters of the species and trophic interaction models relate causally to biotic interactions or to synchronous responses to unmeasured environmental conditions, they were successful in improving predictive power. This does not imply, however, that identifying the causal components is unimportant. In fact, a model that makes better predictions 'for the wrong reasons' will be able to successfully predict future patterns only for as long as the relationship between the causal factors and the apparent patterns of species interactions remains constant into the future.
We found that models using only climatic predictors generated predictions that were opposite to observations. This adds an important caveat to the many studies using correlative, climate-envelope approaches to model the response of biodiversity to climate change (Pacifici et al. 2015, Warren et al. 2018, Trisos et al. 2020). Our results also illustrate the inherent difficulty of predicting species communities, given the poorer predictive than explanatory performance, and attest to the increased risk of overfitting when model complexity increases with the inclusion of more species and environmental predictors. While our conclusions are based on arctic arthropods, since climate warming and species interactions are ubiquitous, we suggest that our approach serves as a conceptual and methodological template for deriving predictive models for other organism groups and ecosystems.
Data accessibility statement
The data and code for replicating the results are deposited in the Dryad Digital Repository <https://doi.org/10.5061/dryad.cc2fqz65p>. | 8,103.2 | 2021-03-22T00:00:00.000 | [
"Environmental Science",
"Biology"
] |
A Third-Order Newton-Type Method for Finding Polar Decomposition
and the unitary factor U ∈ C^{m×n} is unique if A is nonsingular [1]. The exponent 1/2 denotes the principal square root, that is, the one whose eigenvalues lie in the right half-plane. Here, we assume that m ≥ n. This matrix decomposition has many applications in various fields. To give an example, general 3 × 3 linear or 4 × 4 homogeneous matrices can be formed by composing primitive matrices for translation, rotation, scale, and so on. Current 3D computer graphics systems manipulate and interpolate parametric forms of these primitives to generate scenes and motion [2]. Hence, decomposing a composite matrix is necessary. This paper follows one such way, known as the polar decomposition (1). Practical interest in the polar decomposition stems mainly from the fact that the unitary polar factor of A is the nearest unitary matrix to A in any unitarily invariant norm [3]. Apart from (1), the polar decomposition can be defined by the following integral formula [4]:
Introduction
The polar decomposition of A ∈ C^{m×n} factors A as the product A = UH, with U^*U = I_n and rank(H) = rank(A), where U is unitary and H, of order n, is Hermitian positive semidefinite. The Hermitian factor H is always unique and can be expressed as H = (A^*A)^{1/2}, and the unitary factor U ∈ C^{m×n} is unique if A is nonsingular [1]. The exponent 1/2 denotes the principal square root, that is, the one whose eigenvalues lie in the right half-plane. Here, we assume that m ≥ n. This matrix decomposition has many applications in various fields. To give an example, general 3 × 3 linear or 4 × 4 homogeneous matrices can be formed by composing primitive matrices for translation, rotation, scale, and so on. Current 3D computer graphics systems manipulate and interpolate parametric forms of these primitives to generate scenes and motion [2]. Hence, decomposing a composite matrix is necessary. This paper follows one such way, known as the polar decomposition (1).
Practical interest in the polar decomposition stems mainly from the fact that the unitary polar factor of A is the nearest unitary matrix to A in any unitarily invariant norm [3].
Apart from (1), the polar decomposition can be defined by the following integral formula [4]:

U = (2/π) A ∫_0^∞ (t²I + A^*A)^{-1} dt. (3)

Formula (3) illustrates a guiding principle that any property or iteration involving the matrix sign function can be converted into one for the polar decomposition using the replacement of A² by A^*A, and vice versa.
Here, we concentrate on iterative expressions for finding (1), since the integral representation (3) has a complicated structure and requires complex analysis.
Newton's method for square nonsingular cases introduced in [5] is as follows:

X_{k+1} = (1/2)(X_k + X_k^{-*}), X_0 = A, (4)

while its following alternative for general rectangular cases was considered in [6]:

X_{k+1} = (1/2)(X_k + (X_k^†)^*), X_0 = A, (5)

wherein † stands for the Moore-Penrose generalized inverse. Note that, throughout this work, X^{-*} stands for (X^{-1})^*. Similar notations are used throughout.
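As a concrete illustration, the following NumPy sketch implements the rectangular Newton iteration (5); the function name, tolerance, and test sizes are illustrative choices, not taken from the paper.

```python
import numpy as np

def polar_newton(A, tol=1e-10, max_iter=100):
    """Approximate the unitary polar factor U of A via X <- (X + pinv(X)^*) / 2."""
    X = A.astype(complex)
    for _ in range(max_iter):
        X_new = 0.5 * (X + np.linalg.pinv(X).conj().T)
        if np.linalg.norm(X_new - X, np.inf) <= tol:
            return X_new
        X = X_new
    return X

# Usage: recover U and H = U* A for a random rectangular matrix.
rng = np.random.default_rng(345)
A = rng.standard_normal((310, 300)) + 1j * rng.standard_normal((310, 300))
U = polar_newton(A)
H = U.conj().T @ A
print(np.linalg.norm(U.conj().T @ U - np.eye(300)))  # should be tiny: orthonormal columns
print(np.linalg.norm(U @ H - A))                     # should be tiny: A = UH recovered
```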
Authors in [7] derived important results for (4). They discussed that although Newton's method for the polar decomposition immediately destroys the underlying group structure when A belongs to a matrix group, it forces equality between the adjoint and the conjugate transpose of each iterate. This implies that the Newton iterates approach the group at the same rate that they approach unitarity.
The cubically convergent method of Halley has been developed for polar decomposition in [8] as follows:

X_{k+1} = X_k (3I + X_k^*X_k)(I + 3X_k^*X_k)^{-1}, k ≥ 0. (6)

An initial matrix X_0 must be employed in matrix fixed-point type methods such as (4)-(6). An initial approximation for the unitary factor of any matrix can be expressed as

X_0 = A/α, (7)

whereas α > 0 is an estimate of ‖A‖_2.
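A corresponding sketch of Halley's iteration (6) with the initial matrix (7) might look as follows; `numpy.linalg.solve` stands in for the explicit inverse, and all names and tolerances are again illustrative.

```python
import numpy as np

def polar_halley(A, alpha=None, tol=1e-10, max_iter=100):
    """Unitary polar factor via X <- X (3I + X*X) (I + 3 X*X)^(-1), X0 = A/alpha."""
    m, n = A.shape
    I = np.eye(n)
    alpha = np.linalg.norm(A, 2) if alpha is None else alpha  # estimate of ||A||_2
    X = A / alpha
    for _ in range(max_iter):
        Z = X.conj().T @ X
        # Right-multiplying by (I + 3Z)^(-1) equals solving the transposed system.
        X_new = np.linalg.solve((I + 3 * Z).T, (X @ (3 * I + Z)).T).T
        if np.linalg.norm(X_new - X, np.inf) <= tol:
            return X_new
        X = X_new
    return X
```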
The remaining sections of this paper are organized as follows. In Section 2, we derive an iteration function for polar decomposition. The scheme is convergent to the unitary polar factor U, and the rate of convergence is three, since the proposed formulation transforms the singular values of the matrices produced per cycle to unity at a cubic rate. Some illustrations are also provided to support the theoretical aspects of the paper in Section 3. Finally, conclusions are drawn in Section 4.
A Third-Order Method
The procedure of constructing a new iterative method for the unitary factor of A is based on applying a zero-finder to a particular map. That is, solving the following nonlinear (matrix) equation

X² - I = 0, (8)

in which I is the identity matrix, by an appropriate root-finding method could yield new iteration functions (see, e.g., [9,10]). Therefore, we first introduce an iterative expression (9), together with its auxiliary definitions (10), for finding the simple zeros of nonlinear equations.

Theorem 1. Let γ ∈ D be a simple zero of a sufficiently differentiable function f : D ⊆ C → C for an open interval D, which contains x_0 as an initial approximation of γ. Then, iterative expression (9) has third order of convergence.
Proof. The proof is similar to the proofs given in [11], so it is omitted here.
Drawing the attraction basins of (9) for finding the solution of the polynomial equation x² - 1 = 0 in the complex plane reveals that the application of (9) for finding the matrix sign function, and consequently the polar decomposition, has global convergence (see Figure 1). However, it is necessary to show this global behavior analytically.
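Because scheme (9) itself is not reproduced above, the following matplotlib sketch performs the same kind of experiment with the classical third-order Halley map for z² − 1 = 0, whose basins of attraction are likewise the two half-planes; it is a stand-in illustration, not the paper's operator.

```python
import numpy as np
import matplotlib.pyplot as plt

def halley_map(z):
    """One Halley step for f(z) = z^2 - 1 (a third-order stand-in for scheme (9))."""
    return z * (z**2 + 3) / (3 * z**2 + 1)

re, im = np.meshgrid(np.linspace(-2, 2, 400), np.linspace(-2, 2, 400))
z = re + 1j * im
for _ in range(30):                  # iterate the map over the whole grid
    z = halley_map(z)
basin = np.where(z.real > 0, 1, -1)  # which root (+1 or -1) each point reached
plt.imshow(basin, extent=[-2, 2, -2, 2], origin="lower", cmap="coolwarm")
plt.title("Attraction basins of a third-order map for z^2 - 1 = 0")
plt.show()
```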
Let ∂S denote the boundary of the set S. One of the basic notions in fractal theory connected to iterative processes and the convergence of an iterative function is the Julia set of the proposed operator. Letting the number of iterations tend to infinity, we obtain convergence of the iterates to +1 for starting points in the right half-plane and to -1 for starting points in the left half-plane. Furthermore, we can conclude that the basins of attraction A(-1) and A(1) in the case of this operator are the half-planes on either side of the line Re z = 0 (the imaginary axis). Since ±1 are attractive fixed points, the Julia set is the boundary of the basins of attraction A(-1) and A(1); that is, it coincides with ∂A(-1) = ∂A(1). Actually, the Julia set is just the line Re z = 0 for (12), and thus the new third-order method (11) is globally convergent.
By taking into account this global behavior, we extend (11) to the matrix environment, arriving at scheme (15), in which the iterate-dependent matrices are formed from Z_k = X_k^*X_k and X_0 is given by (7).

Theorem 2. Let A ∈ C^{m×n} be an arbitrary matrix. Then, the sequence {X_k} generated by (15) converges to the unitary polar factor U of A.
Proof. Let A have the following SVD form: A = PΣQ^*, where P and Q are unitary and r = rank(A); zeros in Σ may not exist. We define the following sequence of matrices: D_k = P^*X_kQ, k ≥ 0. On the other hand, using (15), one may obtain the same recursion for D_k as for X_k. Since D_0 is diagonal with positive diagonal and zero elements, it follows by induction that every member of the sequence {D_k}_{k=0}^∞ is diagonal, so that (19) represents uncoupled scalar iterations on the singular values. Simple manipulations yield the relation that each positive scalar sequence is attracted to unity; that is to say, the nonzero diagonal entries of D_k tend to one. Therefore, X_k → U, and subsequently H = U^*A. The proof is complete.
Theorem 3. Let A ∈ C^{m×n} be an arbitrary matrix. Then, new method (15) has third order of convergence for finding the unitary polar factor of A.
Proof. Proposed scheme (15) transforms the singular values of A according to (25) and leaves the singular vectors invariant. From (25), it is enough to show that the convergence of the singular values to unity has third order for k ≥ 1. Carrying out the corresponding simplifications, we attain an error relation in which the deviation of each singular value from unity at step k + 1 is proportional to the cube of its deviation at step k. This reveals the third order of convergence for new method (15). The proof is ended.
The proposed method is not a member of the Padé family of iterations given in [12], while still possessing global convergence. So, it is interesting from both theoretical and computational points of view.
The speed of convergence can be slow at the beginning of the process, so it is necessary to scale the matrix before each cycle. An important scaling approach in the Frobenius norm was derived in [13]. So, the new scheme can be expressed in the following accelerated form: compute the scaling factor by (28), apply it to the current iterate, and perform one step of (15), for every k ≥ 0 (29).
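As a sketch of such acceleration, the following combines Newton's iteration with the Frobenius-norm scaling factor ζ_k = (‖X_k^{-1}‖_F/‖X_k‖_F)^{1/2} familiar from scaled Newton methods; whether (28) takes exactly this form here is an assumption on our part.

```python
import numpy as np

def scaled_polar_newton(A, tol=1e-10, max_iter=50):
    """Newton polar iteration with per-step Frobenius-norm scaling
    (assumed form of (28)); square nonsingular case."""
    X = A.astype(complex)
    for _ in range(max_iter):
        Xinv = np.linalg.inv(X)
        zeta = (np.linalg.norm(Xinv, "fro") / np.linalg.norm(X, "fro")) ** 0.5
        X_new = 0.5 * (zeta * X + Xinv.conj().T / zeta)
        if np.linalg.norm(X_new - X, "fro") <= tol * np.linalg.norm(X_new, "fro"):
            return X_new
        X = X_new
    return X
```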
Numerical Examples
We have tested the contributed method (15), denoted by PMP, using the programming package Mathematica 8 in double precision [14]. Apart from this scheme, several iterative methods, such as (5), denoted by NMP, and (6), denoted by HMP, as well as accelerated Newton's method, in which the scaling factor (28) is applied at every step k ≥ 0, have been tested and compared. The stopping criterion in this work requires the difference of two successive iterates to fall below ε, wherein ε is the tolerance.
Example 1. In this experiment, we compare the behavior of different methods. We used six complex randomly generated rectangular 310 × 300 matrices: m = 310; n = 300; number = 6; SeedRandom[345]; The results of the comparison are reported in Tables 1 and 2, applying the tolerance ε = 10^{-10} with X_0 = A. It can easily be observed that there is a clear reduction in the number of iterations using PMP.
The theoretical results of Section 2 have been confirmed by the numerical examples here, demonstrating the convergence behavior of proposed iteration (15). Note that the superiority of PMP over HMP is understandable from the fact that PMP has larger attraction basins, and subsequently it can cluster the singular values to unity much faster than HMP.
Discussion
Matrix decomposition is well established as an important part of computer graphics. Just as every nonzero complex number z = re^{iθ} admits a unique polar representation with r ∈ R^+ and θ ∈ (-π, +π], every matrix A can be decomposed into a product of the unitary polar factor U and a positive semidefinite matrix H. The polar decomposition is of interest in many applications, for example, whenever it is required to orthogonalize a matrix. In this paper, we have developed a new method for finding the unitary polar factor U. It has been shown that the convergence is global and its rate is three. A scaled form of our proposed method has also been given. From the numerical results, we observe that the accuracy of successive approximations increases, showing the stable nature of the method. Also, like the existing methods, the presented method shows consistent convergence behavior. Further improvement of the convergence rate can be considered in future studies.
Table 1:
Results of comparisons for Example 1 in terms of number of iterations.
Table 2:
Results of comparisons for Example 1 in terms of elapsed time (s). | 2,231.6 | 2014-09-30T00:00:00.000 | [
"Mathematics"
] |
Research on Asphalt Pavement Disease Detection Based on Improved YOLOv5s
Pavement disease detection and classification is one of the key problems in computer vision and intelligent analysis. It is an automated target detection technology with great development potential, which can improve the detection efficiency of road management departments. The research based on the convolutional neural network is aimed at realizing asphalt pavement disease detection under low resolution, occlusive interference, and complex environments. Considering the powerful function of the convolutional neural network and its successful application in object detection, we apply it to asphalt pavement disease detection, and the detection results are used for subsequent analysis and decision-making. At present, most research on pavement disease detection focuses on crack detection; there are few studies on multiclass diseases, and detection accuracy and speed need to be improved to meet actual engineering applications. Therefore, a rapid asphalt pavement disease detection method based on improved YOLOv5s was proposed. A complex scene data enhancement technique was developed, which is used to enhance and extend the original data to improve the robustness of the model. The improved lightweight attention module SCBAM was integrated into the backbone network, which can enhance the feature extraction ability and improve the detection performance of the model for small targets. The spatial pyramid pooling was improved into SPPF to fuse the input features, which can solve the multiscale problem of the target and improve the reasoning efficiency of the model to a certain extent. The experimental results showed that, after the model is improved, the average accuracy for pavement disease reaches 94.0%. Compared with YOLOv5s, the precision of the improved YOLOv5-pavement is increased by 3.1%, the recall rate by 4.4%, the F1 score by 3.7%, and the mAP by 3.8%. For transverse cracks, longitudinal cracks, mesh cracks, potholes, and repaired pavement, the detection accuracy of the pavement disease detection method based on YOLOv5-pavement is improved by 3.4%, 3.1%, 4.0%, 7.5%, and 4.8%, respectively, compared with that based on YOLOv5s. The proposed method provides support for the detection of pavement diseases.
Introduction
Asphalt pavement is durable, comfortable to drive on, and smooth, among other characteristics that make it widely used in road construction. During the service of asphalt roads, diseases easily develop due to vehicle load and water intrusion, which affect driving comfort and road traffic safety. Therefore, to ensure the normal service of roads, timely monitoring, evaluation, and repair of road diseases are very important and necessary. At present, the detection methods for pavement diseases mainly include visual inspection, camera survey, and ground penetrating radar [1,2]. Visual inspection relies on human eye observation or video to identify road diseases, which is inefficient and error-prone. Camera measurement requires professional detection vehicles driving along the lane and taking images of the road surface, with post-processing software used to identify the types of pavement diseases. This method has high detection accuracy, but the detection cycle is long and the processing involves too many steps, resulting in low efficiency. Ground penetrating radar (GPR) can detect most pavement diseases by transmitting an electromagnetic pulse [3], but it is difficult to apply on a large scale because of its high cost. Zhou et al. [4] put forward a focus on intelligent road disease detection and summarized the commonly used equipment in intelligent road disease detection technology, including cameras, GPR, LiDAR, and IMU. The evolution and development of road disease detection technology are described systematically, which is of practical significance for the future development of road disease detection technology. Therefore, it is of great practical significance to develop an automated pavement disease detection method based on images and deep learning.
With the continuous development of computer technology, object detection algorithms based on deep learning have achieved rapid development and have been widely used in autonomous driving, face recognition, crop disease and pest recognition, defect detection, and other fields [5-7]. There are two main types of deep-learning object detection algorithms. The first is the two-stage target detection algorithm, which divides feature extraction and target localization into two stages, such as R-CNN [8,9] (Region Proposals for Convolutional Neural Networks), Fast R-CNN [10], and Faster R-CNN [11-13]. The second category is the one-stage target detection algorithm, which integrates feature extraction and localization, such as SSD [14] (Single Shot MultiBox Detector) and the YOLO [15,16] (You Only Look Once) series. YOLOv5s is an improved version of the YOLO series [17-19], with a simple training process, small physical space occupation, and high detection accuracy, and can be used for the recognition and detection of asphalt pavement diseases. YOLOv5s extracts target features through multilayer convolution and pooling operations, which causes a large loss of feature information and low accuracy for the target. Enhancement modules can be used to improve the feature extraction ability of the model and help with the detection task. In the industrial field, it has been proposed to change the last part of the deep residual network (ResNet) structure into a deformable convolution for lightweight improvement [20]. The feature fusion structure has been improved to increase positioning accuracy and recognition rate [21]. A further detection scale has been added [22], and NMS has been replaced by Distance Intersection over Union non-maximum suppression (DIoU NMS) [23,24], which can suppress erroneous detections and enhance the detection capability for small targets. Model performance has been greatly improved by adding a small-target detection layer and Squeeze-and-Excitation Networks (SENet) [25], introducing the CIoU loss function, and using transfer learning methods [26].
However, in research on road surface disease recognition based on deep learning, existing detection methods mainly focus on crack detection, and there are few studies on multiclass disease detection. For example, Zhu [27] proposed a pavement crack detection algorithm based on defect image segmentation and image edge detection. Wu et al. [28] proposed a novel and efficient image-processing method for extracting pavement cracks from blurred and discontinuous pavement images. A crack detection network based on a multiscale extended convolution module and an upsampling module was proposed by Song et al. [29]. Li et al. [30] describe an innovative vision-based pavement crack detection strategy that provides a direct pavement surface condition index (PCI) for specific pavement locations. This strategy uses a convolutional neural network (CNN) algorithm to mine a database containing more than 5000 pavement damage images to classify pavement crack types, and the overall pavement damage rate detection accuracy reaches 90%. Zhang et al. [31] proposed a unified crack and sealed-crack detection method, which detects and separates cracks and sealed cracks under the same framework. It trains a deep convolutional neural network to preclassify pavement images into cracks, sealed cracks, and background regions. A block threshold segmentation method is proposed to effectively separate crack and sealed-crack pixels. Finally, a curve detection method based on tensor voting is used to extract the cracks/sealed cracks. For the detection of multiclass road diseases, Dong and Liu [32] proposed an automatic road damage detection and location method using the Mask R-CNN algorithm and an active contour model. Tang et al. [33] proposed a new deep-learning framework called IOPLIN. Similarly, Zhao et al. [34] proposed DASNet, a deep convolutional neural network, which can be used to automatically identify road diseases. The network uses deformable convolution instead of conventional convolution as the input of the feature pyramid. Before feature fusion, the same supervisory signal is added to multiscale features to reduce semantic differences. Context information is extracted through residual feature enhancement, and the information loss of the top pyramid feature map is reduced. The loss function is improved to address the imbalance of positive and negative backgrounds. The method achieved 41.1 mAP, an improvement of 3.4 AP over the Faster R-CNN baseline. Mao et al. [35] built a framework for a UAV-based pavement disease recognition and perception system, used UAVs to carry out pavement image data acquisition experiments, analyzed pavement disease image preprocessing technology based on the wavelet threshold transform, studied pavement disease image preprocessing technology based on DPM, and proposed a pavement disease recognition method based on the VGG-16 neural network model. Due to the high complexity, many parameters, and large size of these algorithms, the detection speed of many models is slow, making it difficult to meet practical application requirements. Therefore, it is of high research significance to improve the recognition accuracy of multiclass road diseases while keeping the model lightweight.
This research takes four asphalt pavement diseases, namely transverse cracks, longitudinal cracks, mesh cracks, and potholes, together with repaired pavement, as the research objects; uses a detection vehicle to collect pavement image data; improves the lightweight target detection model YOLOv5s; and develops a specialized fast detection model, YOLOv5-pavement, for the various diseases of asphalt pavement. The improved attention module SCBAM is integrated into the backbone network; the spatial pyramid pooling structure is improved into SPPF; and complex scene data enhancement technology is developed to enhance the training set images. The processing flow is shown in Figure 1. This method provides technical support for the accurate and rapid detection of multiclass disease targets on asphalt pavement.

The image size is … pixels × 1840 pixels, the pixel size is 5.5 μm × 5.5 μm, the sampling spacing is 5 m, the shooting height is 2 m, and the shooting time is daytime. A total of more than 50,000 images were collected in this study, and the collection was continuous along the road. After collection, images containing diseases were screened out of the data set. After inspection, 7641 images in the data set were found to contain diseases. Due to certain limitations, this study examined the four diseases of transverse cracks, longitudinal cracks, mesh cracks, and potholes, plus the repaired pavement, as shown in Figure 2.
Materials and Methods
2.1.2. Image Preprocessing. The object detection model based on deep learning is trained on a large amount of image data, so it is necessary to build a data set of sufficient size and variety. Firstly, 764 images were randomly selected from the 7641 disease images as the test set, and the remaining 6877 images were used as the training set. The distribution of image samples in the data set is shown in Table 1. Secondly, to improve the training efficiency of the road disease detection model, the original 6877 training images were compressed, their length and width reduced to 1/2 of the original. Then, the image annotation software "LabelImg" was used to draw the bounding rectangles of the different road disease targets in the compressed images to realize road disease annotation. The images were labeled with the smallest rectangle around each pavement disease to ensure that the rectangle contains as little background area as possible. Finally, to enrich the background information of the image data, data enhancement was performed on the training set to better extract the features of diseases belonging to different labeled categories and avoid overfitting the trained model.

Table 1 shows that there are large differences in the number of different types of diseases. The numbers of transverse cracks, longitudinal cracks, and mesh cracks are sufficient, and only background-information enhancement is needed to build a rich data set. The numbers of potholes and repaired pavements are seriously insufficient and unbalanced with respect to the other diseases, and many potholes are small targets, so these categories need to be strengthened and expanded. In an actual detection task, complex environmental factors such as light, shadow, and water are key influences on model accuracy. Because asphalt pavement diseases have different causes, transverse and longitudinal cracks have strong directionality, so the traditional data enhancement methods of stretching and rotation cannot be used on pavement disease data sets. To improve the detection performance of the algorithm in complex environments and address the unbalanced samples among multiple diseases, this study referred to the complex environments a road surface may present and proposed a complex scene data enhancement technology based on the functions of the Python data enhancement library "Imgaug", which addressed the unbalanced numbers of the different disease categories.

The detailed steps of complex scene data enhancement are as follows; a code sketch follows this description. The enhancement is divided into two parts. In the first part, the magnitudes of Gaussian noise, fog, rain, snow, mud, and other environmental noise are determined by calling modules of the "Imgaug" library, and the noise is gradually added to the original image to imitate the road environment. The second part invokes the motion blur, brightness adjustment, and saturation modules of the "Imgaug" library and adds them to the original image to mimic the weather environment. For each image, 2 or 3 enhancement methods are randomly selected from the first part, and then 1 enhancement method is randomly selected from the second part and added to the original image to complete the enhancement process.
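A minimal sketch of such a two-part pipeline with the imgaug library is given below; the specific augmenters and parameter ranges are illustrative assumptions (imgaug offers no dedicated mud effect, so mud is omitted), not the authors' exact configuration.

```python
import imgaug.augmenters as iaa

# Part 1: environmental noise (pick 2-3 per image); Part 2: imaging conditions (pick 1).
environment = iaa.SomeOf((2, 3), [
    iaa.AdditiveGaussianNoise(scale=(0, 0.05 * 255)),  # sensor-style noise
    iaa.Fog(),
    iaa.Rain(speed=(0.1, 0.3)),
    iaa.Snowflakes(flake_size=(0.2, 0.5)),
])
conditions = iaa.OneOf([
    iaa.MotionBlur(k=(3, 7)),
    iaa.MultiplyBrightness((0.6, 1.4)),
    iaa.MultiplySaturation((0.5, 1.5)),
])
augment = iaa.Sequential([environment, conditions])

# images: a list/array of HxWx3 uint8 pavement photos (hypothetical variable).
# augmented = augment(images=images)
```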
The original image data were enhanced by the complex scene data enhancement technology, and the amount of data in each category was increased to about 2500. The final training set consisted of 12500 images, which were used as the training data set of the pavement disease detection model, including 5623 enhanced images and 6877 original images. Training set data enhanced by complex scene data enhancement are shown in Figure 3.
The Structure of YOLOv5s.
YOLOv5 is the latest in the YOLO series. The network model's detection accuracy is high, the inference speed is fast, and the fastest detection speed can reach 140 frames per second (FPS). The YOLOv5 target detection network model is small, only 14.4 M. Therefore, the advantages of the YOLOv5 network are high detection accuracy, light weight, and fast detection speed.
YOLOv5s consists of four parts, as shown in Figure 4: input, backbone, neck, and prediction. Input mainly covers the preprocessing of data. The backbone extracts image features at different levels through multilayer convolution operations. The neck network consists of two parts, a feature pyramid network (FPN) and a path aggregation network (PAN).

2.3. Improvement of YOLOv5s. The pavement disease detection algorithm needs to accurately identify various diseases under various circumstances in a complex pavement environment. The simple structure of YOLOv5s, its weak feature extraction ability, and the complex pavement environment together lead to low detection accuracy for pavement diseases. The attention mechanism has been proven to enhance a model's feature extraction ability, and adding an attention mechanism to the road surface disease detection model helps improve detection accuracy in complex environments and for small targets. Therefore, this study improves the backbone network of YOLOv5s by adding the improved attention module SCBAM to strengthen the feature extraction ability of the model, which can improve detection accuracy in complex scenes and for small targets. At the same time, to address the multiscale problem of the target to a certain extent and improve the reasoning speed of the model, this paper improves the SPP module into SPPF. In this way, the asphalt pavement disease detection model is optimized to achieve high reasoning speed and improved detection accuracy.
CBAM Attention Module and Its Improvement.
The convolutional block attention module (CBAM) is lightweight. CBAM processes the input feature map through an internal channel attention module and a spatial attention module to simplify feature extraction and improve the detection speed of the model. The action mechanism is weighted fusion, as in Equations (1) and (2):

F_1 = M_c(F) ⊗ F, (1)
F_2 = M_s(F_1) ⊗ F_1, (2)

where F is the input feature map, F_1 is the feature map obtained by channel attention weighting, F_2 is the feature map obtained by spatial attention weighting, M_c(F) is the channel attention output weight, M_s(F_1) is the spatial attention output weight, and ⊗ denotes element-wise multiplication. The CBAM attention module and its improvement are shown in Figure 5.
The mechanism of the channel attention module is as follows: the input feature map F is processed by global max pooling and global average pooling to obtain two 1 × 1 × C feature maps. These are fed into a multilayer perceptron (MLP), the MLP output feature maps are combined by element-wise addition, and the result is processed by the Sigmoid activation function to generate the channel attention map M_c. Finally, M_c and the input feature map F are combined by element-wise multiplication to produce F_1.

The principle of spatial attention is that the spatial dimension is unchanged while the channel dimension is compressed; this module focuses on the location information of the target. Its operation flow is as follows: F_1 is taken as the input feature map of this module. After maximum pooling and average pooling along the channel axis, two tensors of size H × W × 1 are generated; these are stacked together by a concat operation, and a convolution operation then reduces the channel number to 1, with H and W unchanged. M_s is obtained through the activation function (Sigmoid), and finally M_s and F_1 are multiplied to produce the generated features.
The traditional spatial attention module squeezes image spatial information using average pooling and maximum pooling. The error of feature extraction comes from the increase in estimate variance caused by the limited neighborhood size and from the deviation of the estimated mean caused by errors in the convolution layer parameters. The former can be reduced by average pooling, which retains more background information, while the latter can be reduced by maximum pooling, which retains more detailed texture information. However, this squeezing method does not make full use of the spatial information of the image and does not capture spatial information at different scales to enrich the feature space. Moreover, spatial attention only considers the information of local regions and cannot establish long-distance dependence. In this paper, in addition to squeezing image spatial information using average pooling and maximum pooling, stochastic pooling is added to enrich spatial information and retain detailed texture information. Stochastic pooling assigns a probability to each pixel according to its numerical size and then subsamples according to that probability. The improved spatial attention module applies a stochastic pooling operation to the input feature map, and the spatial attention feature map is then obtained by concatenating the three feature descriptors and operating on them with a 3 × 3 convolution kernel. This scheme helps obtain more feature information and strengthens the feature extraction ability of the algorithm model for small targets.
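The following PyTorch sketch shows one plausible reading of this improved spatial attention; the module names, the softmax-based stochastic channel pooling, and the training/inference split are our assumptions rather than the authors' released code.

```python
import torch
import torch.nn as nn

class StochasticChannelPool(nn.Module):
    """Squeeze the channel dimension by sampling one channel per pixel,
    with probability proportional to each channel's response."""
    def forward(self, x):                       # x: (B, C, H, W)
        probs = torch.softmax(x, dim=1)         # per-pixel channel probabilities
        if self.training:
            b, c, h, w = x.shape
            flat = probs.permute(0, 2, 3, 1).reshape(-1, c)   # (B*H*W, C)
            idx = torch.multinomial(flat, 1)                  # sample a channel index
            out = x.permute(0, 2, 3, 1).reshape(-1, c).gather(1, idx)
            return out.reshape(b, h, w, 1).permute(0, 3, 1, 2)
        return (probs * x).sum(dim=1, keepdim=True)  # expectation at inference time

class SpatialAttentionS(nn.Module):
    """Improved spatial attention: avg, max and stochastic descriptors -> 3x3 conv."""
    def __init__(self):
        super().__init__()
        self.stochastic = StochasticChannelPool()
        self.conv = nn.Conv2d(3, 1, kernel_size=3, padding=1, bias=False)

    def forward(self, x):
        avg = x.mean(dim=1, keepdim=True)                 # background information
        mx, _ = x.max(dim=1, keepdim=True)                # detailed texture
        st = self.stochastic(x)                           # enriched spatial detail
        weight = torch.sigmoid(self.conv(torch.cat([avg, mx, st], dim=1)))
        return x * weight
```

In the full SCBAM, this spatial branch would be applied after the standard CBAM channel attention of Equations (1) and (2).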
In this study, the improved CBAM attention module is named SCBAM and added to the backbone's C3 module, as shown in Figure 6. The function of the attention module is to give more weight to important areas: when outputting diseases, the model focuses on the corresponding areas of the picture, improving the feature extraction of road diseases.
Improvement of Spatial Pyramid Pooling.
Spatial pyramid pooling (SPP) was proposed by Microsoft in 2015 [36]. It can transform input features of any scale into the same dimension and output the features after stacking the blocks in parallel, which addresses the underutilization of deep semantic information. Different SPP structures are shown in Figure 7. The feature extraction process is as follows: firstly, the input feature map is partitioned by pooling at 1 × 1, 2 × 2, and 4 × 4 sizes, yielding 21 subblocks. Secondly, the maximum value is selected from each of the 21 subblocks, so that an input feature map of arbitrary size is converted into a fixed 21-dimensional feature. Finally, the 21-dimensional features are stacked and pooled to complete the SPP process. Compared with the original SPP structure, the new SPP has an additional convolution block (Conv, Batch Normalization, Leaky ReLU; CBL) before and after it, and the sizes of the pooling kernels in the middle are 1 × 1, 5 × 5, 9 × 9, and 13 × 13.
The SPP module realizes the fusion of local and global features and enriches the expressive ability of the final feature map, thereby improving the mAP. However, the parallel pooling method is time-consuming. Therefore, in this paper, the more efficient improved spatial pyramid pooling structure (SPPF) is adopted for pooling, and the expression of deep semantic information is enhanced by stacking features after multikernel pooling. Its structure changes the pooling of the input feature layers from parallel to serial: stacking and pooling are carried out after multiple convolutions in serial mode to increase the image receptive field and enrich feature information. SPPF uses three serial 5 × 5 pooling layers instead of a single 13 × 13 pooling layer to obtain the same processing effect, but with significantly improved calculation speed. Under the same basic convolution code, with 32 input channels and 128 output channels, the inference time of SPP is 0.5373 s, while that of SPPF is 0.2078 s; SPPF takes 0.3295 s less than SPP.
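A compact PyTorch sketch of this serial structure, built from the CBL (Conv + Batch Normalization + Leaky ReLU) block described above, is shown below; the channel sizes and the 0.1 negative slope are illustrative assumptions.

```python
import torch
import torch.nn as nn

def cbl(c_in, c_out, k=1):
    """CBL block: Conv + Batch Normalization + Leaky ReLU."""
    return nn.Sequential(
        nn.Conv2d(c_in, c_out, k, padding=k // 2, bias=False),
        nn.BatchNorm2d(c_out),
        nn.LeakyReLU(0.1, inplace=True),
    )

class SPPF(nn.Module):
    """Serial pooling: three chained 5x5 max-pools reproduce the 5/9/13
    receptive fields of parallel SPP while reusing intermediate results."""
    def __init__(self, c_in, c_out, k=5):
        super().__init__()
        c_mid = c_in // 2
        self.cv1 = cbl(c_in, c_mid)
        self.pool = nn.MaxPool2d(kernel_size=k, stride=1, padding=k // 2)
        self.cv2 = cbl(c_mid * 4, c_out)

    def forward(self, x):
        x = self.cv1(x)
        y1 = self.pool(x)    # receptive field 5x5
        y2 = self.pool(y1)   # equivalent to 9x9
        y3 = self.pool(y2)   # equivalent to 13x13
        return self.cv2(torch.cat([x, y1, y2, y3], dim=1))

# e.g. SPPF(32, 128) matches the 32-in/128-out timing comparison in the text.
```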
Based on the improvements to YOLOv5s for the multiclass asphalt pavement disease detection task, the overall framework for asphalt pavement disease detection was proposed and named YOLOv5-pavement. The attention module was added to the backbone network, the SPP module was improved into SPPF, and the data set was enhanced using the data enhancement technology. The overall framework is shown in Figure 8. The initial learning rate, weight decay coefficient, training momentum, confidence, IOU threshold, and epochs for training the pavement disease detection model based on YOLOv5-pavement were set to 0.001, 0.0005, 0.9, 0.5, and 300, respectively. After training, the weight file of the pavement disease detection model was saved, the test set was validated, and the results were used to evaluate the performance of the model. The final output of the network is the identification of the five classes of target boxes and the probability of belonging to a specific category.
Evaluation Indicators of Model.
Precision, recall, F1 score, mean average precision (mAP), space occupied by the model, and reasoning speed were selected as the evaluation indexes of each model.
Precision and recall are important indexes for evaluating model accuracy, as shown in Equations (3) and (4):

P = TP / (TP + FP), (3)
R = TP / (TP + FN), (4)

where P is the precision, R is the recall, TP is the number of true positives, FP is the number of false positives, and FN is the number of false negatives. The F1 score is a measure for classification problems; it is the harmonic mean of precision and recall, as shown in Equation (5):

F1 = 2PR / (P + R), (5)

where P is the precision and R is the recall. AP and mAP are calculated as shown in Equations (6) and (7):

AP = (1/n) Σ_{i=1}^{n} P_i, (6)
mAP = (1/N) Σ_{j=1}^{N} AP_j. (7)
AP at 0.5 is defined as follows: when the IOU threshold is 0.5, all detection results for a class with n positive examples are arranged in descending order of confidence, and each additional positive example corresponds to a precision value P_i. Averaging these values gives AP at 0.5, and mAP at 0.5 is the mean of the AP values over all classes, where N is the number of detected categories. The space occupied by the model and its reasoning speed are measured directly.

Figure 8: The structure of YOLOv5-pavement.
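As an illustration of the AP definition above, the following NumPy sketch averages the precision values recorded at each additional positive example in a confidence-sorted detection list; it is a simplified stand-in, not the authors' evaluation script.

```python
import numpy as np

def average_precision(is_tp, n_positives):
    """is_tp: booleans for detections sorted by descending confidence;
    n_positives: number of ground-truth targets of this class."""
    is_tp = np.asarray(is_tp, dtype=float)
    tp_cum = np.cumsum(is_tp)            # running true-positive count
    ranks = np.arange(1, len(is_tp) + 1)
    precisions = tp_cum / ranks          # precision after each detection
    # Average precision only at the ranks where a positive example is added.
    return precisions[is_tp.astype(bool)].sum() / n_positives

# Toy usage: 5 detections, 4 ground-truth targets.
print(average_precision([True, True, False, True, False], n_positives=4))
```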
Results of Training and Validation.
The change in the training loss value of YOLOv5-pavement is shown in Figure 9. The loss value drops sharply over iterations 0 to 20. From 20 to 300 iterations, the loss value changes within a stable range, showing a slow downward trend. After 300 iterations, the loss value stabilizes around 0.02, and the model reaches a relatively stable state, completing training. The results in the figure show that the model is well trained without overfitting.
Analysis of Pavement Disease Detection.
In order to verify the performance of the pavement disease detection model based on YOLOv5-pavement proposed in this study, the recognition results of the model on the 764 test set images were further analyzed. There are 910 targets to be detected in the 764 test images: 307 transverse cracks, 253 longitudinal cracks, 248 mesh cracks, 39 potholes, and 63 repaired pavement targets. The pavement disease detection results based on YOLOv5-pavement are shown in Table 2 and Figure 10, including the precision, recall, and mAP for each type of detection target. The validation results based on YOLOv5s in Table 2 and Figure 10 show that the average precision for transverse cracks, longitudinal cracks, mesh cracks, potholes, and repaired pavement is 88.5%, 88.9%, 92.7%, 87.2%, and 87.3%, respectively, and the mAP is 89.7%. In the validation based on YOLOv5-pavement, the average precision for transverse cracks, longitudinal cracks, mesh cracks, potholes, and repaired pavement is 91.9%, 92.0%, 96.7%, 94.7%, and 92.1%, respectively, and the mAP is 93.5%. In the same validation, the F1 scores for transverse cracks, longitudinal cracks, mesh cracks, potholes, and repaired pavement are 91.7%, 91.1%, 94.1%, 94.9%, and 88.9%, and the overall F1 score of the model is 92.0%. Figure 10 shows that the average precision of YOLOv5s for transverse cracks, longitudinal cracks, potholes, and repaired pavement is lower than 90%. This is because many of the transverse cracks, longitudinal cracks, and potholes collected in this study exist as small targets, while the feature extraction ability of YOLOv5s is weak, resulting in poor small-target detection and low average precision for potholes. The characteristics of repaired pavement are often not obvious, and the weak feature extraction of YOLOv5s results in its low mAP. Compared with the method based on YOLOv5s, the average precision of YOLOv5-pavement for transverse cracks, longitudinal cracks, mesh cracks, potholes, and repaired pavement improved by 3.4%, 3.1%, 4.0%, 7.5%, and 4.8%, respectively. The detection results show that the improved scheme in this paper has a positive effect on pavement disease detection, effectively strengthening the feature extraction ability of the model and its small-target detection, and significantly improving the average precision for the four types of asphalt pavement diseases and the repaired pavement. Figure 11 shows the detection results of the two models for the five types of targets in complex environments. The pavement disease detection model based on YOLOv5s misses mesh cracks under strong light and misses small-area potholes under weak light. The pavement disease detection model based on YOLOv5-pavement was not disturbed by
the environment: the mesh cracks were completely detected, and all the potholes were detected in the low-brightness environment.
Ablation Experiment and Comparison of Different Models.
To verify the improvement contributed by each module of the improved algorithm, ablation experiments were conducted. Based on YOLOv5s, the CBAM module, the SCBAM module, and SPPF were added, respectively, to explore the influence of each improvement on the model. The ablation results are shown in Table 3. They show that the precision, recall, F1 score, mAP, and reasoning speed of the plan B model improve by 0.6%, 0.8%, 0.7%, 0.4%, and 4 FPS, respectively, over plan A after adding the SPPF module. The results show that SPPF better integrates the feature information of the target in the multiscale target detection task, improves the detection accuracy of the algorithm, and accelerates the reasoning speed of the model to a certain extent. Compared with plan A, plan D, which integrates the improved SCBAM attention module, increases precision by 2.8%, recall by 3.2%, F1 score by 3.0%, and mAP by 2.5%, showing that the feature extraction ability of the model is enhanced after adding SCBAM. At the same time, compared with plan C, plan D increases precision by 0.6%, recall by 0.4%, F1 score by 0.5%, and mAP by 0.4%, showing that the improved SCBAM retains more target features than CBAM and strengthens the feature extraction ability of the model. The results of the pavement disease detection model based on YOLOv5-pavement in plan E show that precision, recall, F1 score, and mAP increase by 3.1%, 4.4%, 3.7%, and 3.8% over plan A, respectively. The ablation experiments show that the improved modules have a positive effect on the model.
To verify the effectiveness of the proposed method, the proposed algorithm model was compared with other object detection models on 200 selected pavement images. Faster R-CNN, SSD, YOLOv3, YOLOv4, and the YOLOv5-pavement proposed in this paper were selected for the lateral comparison experiments. The results are shown in Table 4.
Experimental results show that the average accuracy of the pavement disease detection model based on YOLOv5-pavement was slightly lower than that of the two-stage algorithm Faster R-CNN and second only to YOLOv4 among the one-stage algorithms, reaching 93.5%. Compared with SSD and YOLOv3, YOLOv5-pavement's mAP is higher by 7.4% and 9.6%, respectively. In terms of the space occupied by the model, YOLOv5-pavement is the smallest, at only 14.1 M; its size is only 7.5% of SSD's and 6.0% of YOLOv3's. Moreover, YOLOv5-pavement has an obvious advantage in reasoning speed, with FPS reaching 82, which is 34 FPS, 70 FPS, 26 FPS, and 18 FPS faster than SSD, Faster R-CNN, YOLOv3, and YOLOv4, respectively. SSD, Faster R-CNN, YOLOv3, and YOLOv4 spent 16.1 s, 29.3 s, 14.2 s, and 14.0 s, respectively, detecting the 200 pavement images, while the YOLOv5-pavement proposed in this paper required only 12.6 s. The method in this paper has the fastest detection speed on the same detection task.

Figure 11: Detection results of two models in a complex environment.
Discussion
The advantages of the pavement disease detection method proposed in this paper are reflected in the following aspects. First, the algorithm model can detect and identify multiclass diseases, which many pavement disease detection models cannot. Secondly, the model is highly stable in complex environments and can adapt to various complex detection conditions. Finally, the detection accuracy of the improved YOLOv5-pavement is significantly improved while high reasoning speed and light weight are maintained, giving it potential for large-scale deployment and application. YOLOv5 includes four different architectures (YOLOv5s, YOLOv5m, YOLOv5l, and YOLOv5x), whose main difference is the number of feature extraction modules and convolutions at specific locations in the network. The model size and the number of parameters increase across the four architectures, so YOLOv5 offers high flexibility, allowing different structures to be chosen for development and application according to actual requirements. This study weighs the precision and reasoning speed of the pavement disease detection algorithm: since pavement disease detection depends on images collected along the road by a detection vehicle, YOLOv5s, with its minimal size, has the greatest potential for mobile-terminal deployment and can be deployed on the detection vehicle to perform pavement disease detection directly during acquisition.
The pavement disease detection method proposed in this study relies on images to detect and classify pavement diseases. This method cannot work at night without lighting, and heavy daytime traffic will have a certain impact on the detection work. Due to the limitations of the experimental conditions, this study only examined some possible diseases of asphalt pavement; it did not detect and classify all kinds of asphalt pavement diseases, nor did it consider the disease types possible on roads of other materials, which limits its scope of application. On the other hand, the method proposed in this paper can only detect road surface diseases, while diseases arising in the internal structure of the road cannot be detected, which is another limitation of this research method.
Conclusion
Aiming at the problems of the lack of asphalt pavement disease detection means, the low accuracy of automated detection models, poor small-target detection ability, and susceptibility to environmental interference, an asphalt pavement disease detection method based on improved YOLOv5s was proposed.
A complex scene data enhancement technique is proposed to balance the different types of pavement diseases and simulate complex road environments, improving the robustness of the model. The SCBAM attention module is integrated into the backbone network to optimize its feature extraction and to improve the feature extraction ability and detection accuracy of the pavement disease detection model for difficult-to-detect objects. The SPP is improved into SPPF, further fusing the input features, which addresses the multiscale problem of the target and improves the reasoning speed to a certain extent. The results showed that the improved network model can effectively recognize road surface diseases. The precision, recall, F1 score, mAP, and detection speed of the model reached 91.2%, 92.9%, 92.0%, 93.5%, and 82 FPS, respectively. Compared with the detection method based on YOLOv5s, the average precision for transverse cracks, longitudinal cracks, mesh cracks, potholes, and repaired pavement increased by 3.4%, 3.1%, 4.0%, 7.5%, and 4.8%, respectively, and the mAP increased by 3.8%. The ablation experiment results show that the improved scheme proposed in this paper can improve the accuracy of the pavement disease detection method while keeping the model lightweight and maintaining a high reasoning speed. The comparison between YOLOv5-pavement and other models shows that the detection accuracy of YOLOv5-pavement has certain advantages, and the accuracy gap between YOLOv5-pavement and Faster R-CNN, which has the highest accuracy, is very small. Moreover, the reasoning speed of YOLOv5-pavement is the fastest, and the model occupies only 14.1 M. The asphalt pavement disease detection model presented in this paper has practical value for the detection of asphalt pavement diseases.
Data Availability
The data used to support the findings of this study are included within the article.
Conflicts of Interest
The authors declare no conflict of interest.
Authors' Contributions
All the authors contributed substantially to the manuscript. Original draft preparation and formal analysis were performed by Lingxiao WU. Supervision was conducted by Zhugeng Duan. Review and editing were done by Chenghao Liang. All authors have read and agreed to the published version of the manuscript. | 8,143.8 | 2023-03-15T00:00:00.000 | [
"Engineering",
"Computer Science",
"Environmental Science"
] |
THE ADALINE NEURON MODIFICATION FOR SOLVING THE PROBLEM ON SEARCHING FOR THE REUSABLE FUNCTIONS OF THE INFORMATION SYSTEM
At present, the market for IT services is quite unstable. The Gartner study shows that the cost of IT services decreased significantly over 2015-2016. Some growth in expenses for IT services observed in 2017 did not eliminate the uncertainty of the market dynamics [1]. It should be noted that the software segment remains the largest segment in the IT market today [1]. Therefore, one of the main problems requiring attention from consumers and providers of IT services is the problem of reducing IT service development expenses. One of the most important expense items in IT projects for software product development is the staff expense item. This type of expense includes, in particular, the expenses of hiring staff to participate in the IT project, IT salaries and emoluments, and the training and upgrading of the IT project staff's skills. It should also be noted that existing project management practices recommend that project personnel be divided into groups of permanent employees and employees hired to participate in a particular project [2]. At the same time, existing maturity models of IT companies assume improvement of software writing processes and IT project management in the direction of ensuring repeatability and standardization of software products [3, 4]. This approach enables an assumption of the possibility of substituting intelligent information technologies for the IT project personnel in a number of repeatable processes and software development works, provided that such a replacement is economically viable. Such a solution to the problem of reducing expenses on staff participation in the development of information systems and software products for various purposes is relevant from theoretical and applied points of view.
Introduction
At present, the market for IT services is quite unstable. The Gartner study shows that the cost of IT services significantly decreased over 2015-2016. Some growth in expenses for IT services observed in 2017 did not eliminate the uncertainty of the market dynamics [1]. It should be noted that the software segment remains the largest segment in the IT market today [1]. Therefore, one of the main problems requiring attention from consumers and providers of IT services is the problem of reducing IT service development expenses.

One of the most important expense items in IT projects of software product development is the staff expense item. This type of expense includes, in particular, the expenses of hiring staff to participate in the IT project, IT salaries and emoluments, and training and upgrading of the IT project staff skills. It should also be noted that existing project management practices recommend that project personnel be divided into groups of permanent employees and employees hired to participate in a particular project [2]. At the same time, existing maturity models of IT companies assume improvement of software writing processes and IT project management in the direction of ensuring repeatability and standardization of software products [3, 4]. This approach enables an assumption of the possibility of substituting intelligent information technologies for IT project personnel in a number of repeatable processes and software development works, provided that such a replacement is economically viable. Such a solution to the problem of reducing expenses on staff participation in the development of information systems and software products for various purposes is relevant from both theoretical and applied points of view.
Literature review and problem statement
At present, the problem of using intelligent ITs for automating the development of software products for various purposes is one of the most urgent problems of scientific study in the IT field. At the same time, various neural nets (NN) are considered the main tools of artificial intelligence suitable for solving this problem. For example, recommendations on the application of various methods of artificial intelligence (including NNs) in testing new software products are considered in [5]. The issues of NN application for estimating the size of the developed software are considered in [6]. Application of NNs for recognition of software design patterns based on the open source code of the product is considered in [7].

However, most researchers consider the work connected with classification and identification of reusable components of software products to be the main field in which the application of intelligent ITs is most justified. A review of studies carried out in the field of NN application for identifying reusable components of software products, together with an original NN model for solving this problem, is given in [8]. Classification of software components based on their technical description is considered in [9]. Application of NNs for solving the problem of estimating reusable components is considered in [10], based on the indicators of modularity, interface complexity, supportability, flexibility, and adaptability.

A similar problem is also solved in the process of selecting repository-stored web services to form an internet environment that supports reusable software components for both service providers and consumers. Since the potential of web services for service-oriented computations is universally recognized, the demand for an integrated infrastructure facilitating discovery and publication of services is ever growing. In this connection, it was suggested in [11] to use NN-based methods of intellectual analysis of the descriptions of individual web services in a repository for solving the problem of searching for similar web services. Application of NNs for solving the problem of classifying web services as reusable, stand-alone software components, applicable in satisfying needs independently of or in combination with other web services, is considered in [12].

However, the solutions discussed in [6-12] require conversion of the textual descriptions of software components, their templates and web services into the sets of numerical data necessary for NN functioning. In addition, in the course of applying the results described in [11, 12], a need for reverse conversion of the numerical result of the NN operation into a web-service description arises in a number of cases. Such conversions are unique for each particular NN; they significantly complicate NN implementation and are not reusable in other information system (IS) and software product development tools. This often leads to higher costs and, consequently, to refusal to create and operate NNs as elements of intellectual ITs for the development of ISs or software products.

It should be noted that, in the course of solving problems of classification, identification and search for reusable web services or software components, the initial data for the neural nets are their formal descriptions. Such descriptions can be represented in the following ways: a) a special model based on quantitative metrics for estimating the reuse of software components [10]; b) representation of a component or service as a set of formal descriptions of their functions or interfaces (including application of WSDL and similar formal languages) [9, 11]; c) a formal component description based on UML class diagrams [6].

The latter method is especially promising since it allows one to solve the problems of classification, identification and search for reusable components already in the course of identifying and analyzing the requirements to the products being created. In this case, an additional problem arises: the problem of formal description of the requirements to the product or the IS realizable by reusing existing components. It is suggested to use a knowledge-oriented description of requirements. In particular, application of ontologies in requirements engineering is considered in [13]. Knowledge mining and management for the formation of the requirements specification is studied in [14]. Application of patterns based on frame networks to represent functional requirements at the knowledge level was suggested in [15].

The above analysis allows us to draw a conclusion on the prospects of studies aimed at solving the problems of classification, identification and search for reusable components in the course of elucidation and analysis of requirements to the products being created. Such studies will significantly improve the accuracy of cost estimation and reduce the time spent on the development of information systems and software products.
The aim and objectives of the study
The objective of this work was to modify the neuron model to enable solution of the problem of searching for a description of a reusable function for implementation of a functional requirement to the IS. This makes it possible to reduce the cost of developing ISs and software products by excluding staff from the search for reusable functions.

To achieve this goal, solution of the following tasks was considered:
- to modify the mathematical model of the neuron used to solve the problem of searching for the description of a reusable function;
- to modify the neuron block diagram as applied to the specifics of the problem of searching for the description of a reusable function;
- to consider the features of practical implementation of the proposed modifications.
Results of the modification of the mathematical model of the mADALINE neuron
Currently, there is a fairly large number of neuron types. However, in order to verify the possibility of using them in the problem of searching for the description of a reusable function to implement the functional requirements to the IS, it is necessary to use the neurons that are simplest in their implementation. Therefore, a comparative analysis of the main existing types of neurons was made. Proceeding from this analysis, it was recommended to apply a neuron of the ADALINE type [16]. Neurons of this type can be used both as elementary neurons in an NN composition and independently, in problems of pattern recognition, signal processing and implementation of logical functions.

The block diagram of the ADALINE neuron is shown in Fig. 1, where w_j is the weight factor for the input x_j, j = 1, ..., n, and q is a constant. The binary output, y_j, can take the values +1 or −1 depending on the polarity of the analog signal, u_j. The output signal, u_j, is compared with the external training signal, d_j. The resulting error signal, e_j = d_j − u_j, enters the training algorithm, which adjusts the weights w_j so as to minimize some error function of e_j called the training criterion. A quadratic function is most often used as such a function. This makes it possible to use for training not only the algorithm synthesized by Widrow and Hoff specially for the ADALINE neuron, but also a number of recurrent procedures for adaptive identification [17, 18].
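For reference, a minimal classical ADALINE with the Widrow-Hoff (LMS) update is sketched below, assuming NumPy; the learning rate, epoch count and initialization are illustrative. This is the numeric scheme that the modification described next replaces with symbolic processing.

```python
# A minimal classical ADALINE with Widrow-Hoff (LMS) training; a sketch,
# not the paper's mADALINE modification.
import numpy as np

def train_adaline(X, d, lr=0.01, epochs=50):
    """X: (m, n) inputs; d: (m,) training signals in {-1, +1}."""
    rng = np.random.default_rng(0)
    w = rng.normal(scale=0.1, size=X.shape[1])
    q = 0.0                           # the constant (bias) term
    for _ in range(epochs):
        for x_j, d_j in zip(X, d):
            u_j = w @ x_j + q         # analog output of the linear associator
            e_j = d_j - u_j           # error against the training signal
            w += lr * e_j * x_j       # Widrow-Hoff weight update
            q += lr * e_j
    return w, q

def adaline_output(w, q, x):
    u = w @ x + q
    y = 1 if u >= 0 else -1           # binary output from the polarity of u
    return u, y
```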
To apply the ADALINE neuron to the problem of searching for a description of a reusable function to implement a functional requirement to the IS, the following should be determined: a) a formal description of the input signals, x_j; b) a formal description of the training signal, d_j; c) a formal description of the weight factors, w_j; d) a formal description of the output analog signal, u_j; e) a model of the adaptive linear associator; f) a concrete type of the nonlinear activation function, y_j; g) a concrete type of the training algorithm. In general, the problem of searching for a description of a reusable function to implement a functional requirement to the IS requires comparison of the formal descriptions of reusable functions with the formal description of the functional requirement to the IS given by the Consumer of the IT services. It is proposed to consider the representations of the previously implemented functional requirements to the IS at the level of knowledge, K^lib_fj, as the input signals, x_j; the representation of the i-th functional requirement to the IS at the level of the Consumer's knowledge, K^Utr_i, as the training signal, d_j; and the system-wide representation of the j-th functional requirement to the IS at the level of knowledge, K^IS_fj, as a description of the output analog signal, u_j. Formal descriptions of these representations in the form of frame networks are given in [19]. It should be noted that the representations of requirements at the level of knowledge are pluralities of numerical and symbolic data; the patterns for designing these representations are considered in [15]. However, the ADALINE neuron can process only numeric data, and conversion of the representations into numerical signals would require rather complicated algorithms. Therefore, it was proposed to modify the ADALINE neuron so that it can process the representation data directly, based on their parameters computed during operation of the adaptive linear associator. The modified ADALINE neuron will be denoted hereinafter as mADALINE.
The objective of solving the problem of searching for a reusable function description is to find a function whose reuse would require minimal costs for adapting the found function to the features of the functional requirement. This objective can be represented as a search for a description of a reusable function that duplicates, to the greatest possible extent, the description of the functional requirement to the IS. Therefore, to develop a formal description of the adaptive linear associator, it is proposed to use the description of the problem of identifying duplicate functional requirements considered in [20, 21].
In general, the problem of identifying duplicate requirements can be formally described as follows. Let there be a frame net base, Arch_base, consisting of a set of representations, K. A K representation is a collection of objects, ob (frames, interfaces and links), a description of which is given in [19]; k is the number of IT services generated by the IS. The cost function, which makes it possible to estimate the degree of duplication of representations, involves the number of occurrences of the ob object in the description of an IT service, the number of system-wide representations of functional requirements at the level of knowledge, K^IS_fj, describing the IT service, and the repulsion factor r, r ≥ 1. The problem of synthesizing versions of the description of the created IS architecture can then be formulated as follows: for the given D(Arch_base) and r, find the partition IT_acm such that Profit(acm, r) ∈ [Profit_max − e; Profit_max], where Profit_max is the maximum value of the cost function at the disposal of the IT service provider and e is the permissible error value. In a case when the value of e cannot be determined using expert opinion methods, it is recommended that e = 0.1 × Profit_max [20, 21].

In contrast to the problem of identifying duplicate requirements, the problem of searching for a reusable function description to implement a functional requirement to the IS requires a pairwise comparison of the representations. To describe this comparison, the cost function (2) of the adaptive linear associator of the mADALINE neuron is transformed so that the parameters of expression (3) take the following values: a) the number of IT services being studied (functions of the created IS) is k = 1; b) r = 1 (each functional requirement to the IS is implemented as an independent module of the created IS and does not assume any further decomposition during the design of the IS and the development of its information and software support).
The formal description of the weight factors, w_j, should be changed most strongly. For the proposed modification of the mADALINE neuron, w_j represents not a vector but a single coefficient. However, such a formal description of the weight factor is only of value when solving the task of searching for the description of a reusable function to implement the functional requirement to the IS. As shown in [19], to describe the results of solving this problem, it is more expedient to use the system-wide representation of the functional requirement to the IS at the level of knowledge, K^IS_fj. This representation can be formed on the basis of the result of the computation of w_j. Such a system-wide representation, K^IS_fj, can be used in the future as a basis for forming a specification for reworking a reusable function, taking into account the features of the implemented Consumer's functional requirement to the IS of the IT services.
In the general case, solution of the problem of searching for a description of the reusable function for implementation of the functional requirement to the IS is reduced to the choice of one of the following alternatives: a) recognition of the absence of reusable functions similar to the description of the Consumer's functional requirement to the IS; b) finding the description of one reusable function that is as close as possible to the description of the Consumer's functional requirement; c) finding descriptions of two or more reusable functions that are as similar as possible to the description of the Consumer's functional requirement. The nonlinear activation function, y_j, of the mADALINE neuron can then be described on the basis of the Heaviside unit function, with e(k) denoting the permissible error value at the k-th training step.

Expression (8) requires paying special attention to the form of the training algorithm. In the course of solving the problem of searching for the description of a reusable function, not only numeric but also symbolic data are processed. Therefore, conventional algorithms (for example, the normalized least-squares algorithm or the Widrow-Hoff algorithm) are not suitable for training the mADALINE neuron.
For the simplest training of the mADALINE neuron in the course of solving this problem, an algorithm of searching for the maximal existing similarity is proposed. It consists of the following steps (a sketch of the overall loop is given after the steps).

Step 0. Set the number of iterations k = 0 and the maximum permissible error value e(k) = 0.1.

Step 5. If e(k) ≤ 1, then go to Step 1; otherwise, terminate the algorithm.

This algorithm makes it possible to find the description of the reusable function even if the degree of its similarity to the functional requirement to the created IS is minimal.
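Steps 1-4 are not given above, so the loop body below is only one plausible reading of the algorithm, assuming a hypothetical Jaccard-style similarity over sets of frame-element names; the actual intermediate steps may differ.

```python
# A sketch of the search for the maximal existing similarity; the
# similarity measure over frame-element name sets is hypothetical.
def similarity(req_elems: set, func_elems: set) -> float:
    """Hypothetical Jaccard overlap of frame-element names."""
    union = req_elems | func_elems
    return len(req_elems & func_elems) / len(union) if union else 0.0

def search_max_similarity(req_elems: set, repository: dict):
    """repository: {function_id: set of frame-element names}."""
    k = 0                              # Step 0: iteration counter
    e = 0.1                            # Step 0: permissible error value
    best_id, best_sim = None, -1.0
    for func_id, func_elems in repository.items():
        sim = similarity(req_elems, func_elems)
        if sim > best_sim:             # keep the most similar description
            best_id, best_sim = func_id, sim
        k += 1
    # Step 5 loops while e(k) <= 1; with a single scan this reduces to
    # returning the best match found, even if its similarity is minimal.
    return best_id, best_sim

# Example: search_max_similarity({"user", "login", "page"},
#     {"f1": {"user", "login"}, "f2": {"report", "page"}})
```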
Results of the modification of the block diagram of the mADALINE neuron
The results of the modification of the mADALINE model represented by expressions (5)-(8) require a change of the block diagram of the neuron shown in Fig. 1. This change will determine the main features of the future implementation of the mADALINE neuron in the form of a software module of the intelligent IT for IS creation or modification for various purposes.

In the course of modification of the mADALINE neuron block diagram, the following features should be considered: a) the library of representations of previously implemented functional requirements to the IS at the level of knowledge, in which a finite number of these representations are stored; b) the representation of the j-th Consumer's functional requirement to the IS at the level of knowledge, K^Utr_i, enters the mADALINE neuron from the outside; c) the value of u_j calculated from formula (6) and the value of y_j calculated from formula (7) should be transferred to the external user only in the case when y_j > 0.
Taking into account these conditions, the block diagram of the mADALINE neuron will have the form shown in Fig. 2.
Analysis of the practical implementation of the mADALINE neuron
The proposed block diagram of the mADALINE neuron determines the basic approach to the implementation of neurons of this type. This approach involves the creation and operation of a special repository of functions available for reuse in subsequent IT projects of IS creation or upgrade. It determines the following method of implementation of the mADALINE blocks shown in Fig. 2: a) the blocks of data sampling from the repository and formation of w_j and K^IS_fj are executed using tools of the SQL language or equivalent languages; b) the blocks calculating the values of the Profit(w_j) and y_j parameters, and the block realizing the algorithm of searching for the maximal existing similarity, are executed as software elements.
In general, the providing part of the mADALINE neuron can be implemented in the form of one of the following architectural solutions: a) an independently existing functional module that can be integrated into the IS or IT of automated design of IS and software products for various purposes; b) a separate web-service that can be integrated into existing service-oriented IS to implement the function of searching for web-services that maximally correspond to the given description.
To implement the proposed modifications of the mADALINE neuron, it is expedient to use the intellectual IT of accelerated IS development considered in [22]. This IT is intended for automation of the following processes of the IS life cycle: a) the process of determining the requirements of stakeholders; b) the process of requirements analysis; c) the process of designing the IS architecture.
A set of patterns of representation of functional requirements forms the basis of the intelligent IT of accelerated IS development.These patterns make it possible to represent the knowledge extracted from the requirement descriptions in the form of frame networks.
A fragment of the data diagram of the intelligent IT for accelerated IS development used in implementation of the mADALINE neuron is described in [22].
The following tables are used to implement the mADALINE neuron:
- APP_REQUIREMENT.Requirement, which contains data on the implemented functional requirements, K^lib_fj, available for reuse;
- APP_REQUIREMENT.Attribute_in_requirement, which contains data on the elements of the frames used for description of the Consumer's functional requirements;
- APP_REQUIREMENT.Implemented_attribute, which contains data on the elements of the frames describing the functional requirements available for reuse;
- APP_ATTRIBUTE.Attribute, which contains data on the elements of the frames describing all functional requirements;
- APP_ATTRIBUTE.Data_type, which contains data on the data types of the frame elements;
- APP_ATTRIBUTE.Attribute_type, which contains data on the types of frame elements;
- APP_ATTRIBUTE.Attribute_synonym, which contains data on possible synonyms for the frame element names;
- APP_REQUIREMENT_VERS.Analytical_requirement_version, which contains the representations of the Consumer's functional requirements, K^Utr_i.

The APP_ATTRIBUTE.Attribute_synonym table data are used in searching for possible synonyms for the names of the frame elements available for reuse. If such a synonymous name coincides with the name of a frame element from the description of the Consumer's requirement, then the name of the frame element available for reuse is replaced by the synonymous name.
As a result of executing this query, a cursor is generated. It contains records on the frame elements describing both the Consumer's requirement and the next candidate description available for reuse. The number of records in this cursor is the S(w_j) value. The number of records in the cursor formed as a result of executing the same SELECT query with an additional DISTINCT option enables calculation of the W(w_j) value. This query option can also be used in the formation of a system-wide representation of the functional requirement to the IS at the level of knowledge, K^IS_fj.

The S(w_j) and W(w_j) values enable calculation of the values of the Profit(w_j) and y_j parameters. These parameters are also used in the implementation of the algorithm of searching for the maximal existing similarity. It should be noted that, in the implementation of this algorithm, the parameter j is the pointer to the current record in the APP_REQUIREMENT.Requirement table and the parameter n is equal to the number of records in this table.
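A minimal sketch of the S(w_j)/W(w_j) computation follows, assuming a SQLite mirror of the schema; the join columns and the simplified table names are hypothetical — only the COUNT versus COUNT(DISTINCT) distinction comes from the text above.

```python
# S(w_j) and W(w_j) as record counts of the comparison query, assuming
# a SQLite mirror of the APP_* tables; column names are hypothetical.
import sqlite3

def s_and_w(conn: sqlite3.Connection, req_id: int, impl_id: int):
    base = """
        SELECT {select_expr}
          FROM attribute_in_requirement AS a
          JOIN implemented_attribute   AS i ON i.name = a.name
         WHERE a.requirement_id = ? AND i.requirement_id = ?
    """
    # S(w_j): all records on frame elements shared by the Consumer's
    # requirement and the candidate reusable description.
    s = conn.execute(base.format(select_expr="COUNT(a.name)"),
                     (req_id, impl_id)).fetchone()[0]
    # W(w_j): the same query with DISTINCT, as described above.
    w = conn.execute(base.format(select_expr="COUNT(DISTINCT a.name)"),
                     (req_id, impl_id)).fetchone()[0]
    return s, w
```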
The considered version of the mADALINE neuron implementation makes it possible to fully automate the solution of the problem of searching for the reusable function description. All actions in solving this problem are performed without an analyst's participation.
Preliminary experiments were carried out on the application of the mADALINE neuron implementation in solving the problem of searching for the reusable function description.In order to evaluate the effect of using the mADALINE neuron in the course of solving the problem of searching for the reusable function description, experiments were carried out for the following information systems discussed in [22]: a) functional problems of the League of Ukrainian Clubs of Smart Games Register information and analytical system (hereinafter, LUC Register IAS) were considered as a source of reusable functions; b) the Virtual Bulletin Board information system was considered as a source of functional requirements for which search for reusable functions was necessary.
The data on the functional requirements of the LUC Registry IAS and the services implementing these requirements were used as descriptions of reusable functions.In total, records of 15 such requirements were used in the experiments.
As shown in [22], in the creation of the Virtual Bulletin Board information system, 6 descriptions of requirements to the LUC Registry IAS, based on fragments of the APP_CORE, APP_USER, APP_PRIVILEGE, APP_PAGE, APP_PUBLICATION, APP_PERSON and APP_GEOGRAPHY frame networks, were reused. The decision on the reuse of the requirement descriptions was made based on the results of applying the method of forming the representation of the i-th functional requirement to the IS at the level of the Supplier's knowledge described in [19]. This method enables a search for descriptions of functional requirements that coincide with newly introduced requirements and have already been implemented by the information system developer.
As a result of the experiments, application of the mADALINE neuron made it possible to find reusable descriptions of the same functions of the LUC Register IAS that were recommended with the use of the method of forming the representation of the i-th functional requirement to the IS at the Supplier's knowledge level. The main parameters that make it possible to compare the results of solving the problem of searching for reusable functions by the neuron and by the method are given in Table 1.
Table 1. The results of experiments comparing solutions of the problem of searching for reusable functions using the mADALINE neuron and the method of forming the representation of the i-th functional requirement to the IS at the level of the Supplier's knowledge

| Search problem | Comparison parameter | mADALINE | Method of forming the representation |
|---|---|---|---|
| Creation of shared publications | Time spent by the tool for solving the problem | 0.9 s | 1.5 s |
| | Number of reusable descriptions found | 1 | 1 |
| | Human participation needed in the choice of the reusable description | No | No |
| Registration of the system user's data | Time spent by the tool for solving the problem | 1.3 s | 1.7 s |
| | Number of reusable descriptions found | 1 | 2 |
| | Human participation needed in the choice of the reusable description | No | No |
| Registration of the system user's location | Time spent by the tool for solving the problem | 1.8 s | 2.3 s |
| | Number of reusable descriptions found | 1 | 3 |
| | Human participation needed in the choice of the reusable description | No | Yes |
| System administration | Time spent by the tool for solving the problem | 0.8 s | 1.2 s |
| | Number of reusable descriptions found | 1 | 1 |
| | Human participation needed in the choice of the reusable description | No | No |
| User's navigation in the system | Time spent by the tool for solving the problem | 0.9 s | 1.3 s |
| | Number of reusable descriptions found | 1 | 1 |
| | Human participation needed in the choice of the reusable description | No | No |
| Delimitation of the user's right of system access | Time spent by the tool for solving the problem | 0.9 s | 1.2 s |
| | Number of reusable descriptions found | 1 | 1 |
| | Human participation needed in the choice of the reusable description | No | No |
The main disadvantage of the method of forming the representation of the i-th functional requirement to the IS at the level of the Supplier's knowledge is that it finds all possible descriptions of the implemented functional requirements to the LUC Registry IAS in which information from the requirement to the Virtual Bulletin Board information system is present. Therefore, a person must participate in order to choose one of the found descriptions, which will then be recommended for reuse.

The use of the mADALINE neuron to solve this problem makes human participation unnecessary. This becomes possible because, during the solution of the problem, the mADALINE neuron finds the description of the LUC Registry IAS reusable function that coincides as much as possible with the analyzed functional requirement of the Virtual Bulletin Board information system.
To verify the behavior of the proposed implementation of the mADALINE neuron on a large number of records of implemented requirements available for reuse, additional studies are necessary.
Discussion of results of modification of the model and the block diagram of the mADALINE neuron
The proposed results of the modification of the mADALINE neuron completely exclude the analyst from the process of searching for reusable functions or web services that satisfy a functional requirement. In accordance with the block diagram of the mADALINE neuron shown in Fig. 2, human participation is required only in the preparation of the representation, K^Utr_i, of the Consumer's functional requirement to the created IS at the level of his knowledge. All other operations of searching for a reusable function maximally similar to the description of the requirement are performed by the mADALINE neuron in automatic mode.
The main feature of the proposed modification of the mADALINE neuron is the necessity of a formal description of both the reusable functions and the representation of the functional requirement to the IS. Although a large number of studies have been carried out in this direction (e.g., [13-15]), the problem of formal description of requirements (including at the knowledge level) is still far from being resolved.
Among the shortcomings of the proposed modification, it is necessary to emphasize the considerable computational cost of executing the algorithm of searching for the maximal existing similarity. In the proposed version of the algorithm, one iteration requires n scans of the repository records; as further functional requirements to the IS are implemented, the number of records in the repository will increase, for example to 10×n. Therefore, the issues of reducing the computational complexity and increasing the accuracy of the mADALINE training algorithm are a very promising direction of study.
Conclusions
1. The formal descriptions of the input signals, x_j, the training signal, d_j, the weight factors, w_j, and the output analog signal, u_j, of the ADALINE neuron were modified. Based on the results obtained, the mathematical model of the adaptive linear associator, the formal description of the nonlinear activation function, y_j, and the training algorithm for the ADALINE neuron were modified. In the course of the modification, the features of solving the problem of searching for the description of a reusable function for implementation of a functional requirement to the IS were taken into account. The proposed solutions make it possible to use the capabilities of the modified mADALINE neuron in processing representations of functional requirements and reusable functions in the form of frame networks.

2. A modification of the ADALINE neuron block diagram was proposed as applied to the peculiarities of solving the problem of searching for the description of a reusable function for implementation of the functional requirements to the IS.

3. An approach to the implementation of the proposed version of the modified neuron block diagram was described. Two versions of architectural solutions for implementation of the providing part of the mADALINE neuron were proposed. The peculiarities of implementation of the mADALINE neuron as an element of the intellectual IT for accelerated IS development were considered. The proposed implementation makes it possible to completely exclude the analyst from the process of solving the problem of searching for the reusable function description. This will reduce the cost of developing or modifying an IS by reducing the labor costs of the analysts who previously participated in the search for reusable function descriptions to realize functional requirements to the IS.
Fig. 1. Block diagram of the ADALINE neuron
Fig. 2. Block diagram of the modified mADALINE neuron
"Computer Science"
] |
Digital Literacy Analysis of Elementary School Students Through Implementation of E-Learning Based Learning Management System
aspects, most of the students had sufficient digital literacy skills, and there were four aspects in which most of the students had good abilities. It can thus be concluded that the students' digital literacy abilities are mostly in the sufficient and good categories, and that e-learning can have an impact on students' digital literacy skills.
INTRODUCTION
Education is one of the most important parts of a person's life because it leads a person through the maturation process that is carried into everyday life. The importance of education has been realized by the community, so many parents provide early education for their children. Education continues to develop over time, accompanied by the development of science and technology (Tan, 2017; Thorvaldsen & Madsen, 2020). The Covid-19 pandemic has had a major impact on many aspects of human life (Menabò et al., 2021). Its impacts include the implementation of social and physical distancing, restrictions on movement, and recommendations to stay at home. One of the aspects most affected by this pandemic is education. These conditions have affected the implementation of education at all levels, from kindergarten and elementary school to junior high school, high school and university. Various policies in the education sector were issued in the form of circulars, namely: first, prevention and handling within the Ministry of Education and Culture; second, prevention in education units; and third, education policy during the emergency period of the spread of coronavirus disease 2019.
The implementation of face-to-face education in schools began to be limited to curb the spread of COVID-19. As a result, learning takes place not only through limited face-to-face instruction but also through distance learning, commonly referred to as electronic learning (e-learning) (Bubb & Jones, 2020). E-learning is distance or virtual learning that involves technology (Yazon et al., 2019), so its implementation requires good collaboration between teachers and students. E-learning, or distance learning, was developed as a learning medium that connects educators and students online in a virtual classroom without their having to be physically in one room (Nahdi & Jatisunda, 2020). Through e-learning, students have flexible learning time and can study anytime and anywhere, so they can participate in learning even from home (Ozturk & Ohi, 2018).
E-learning is a learning method that uses an internet-based interactive model and a learning management system (LMS) in both formal and informal learning (Mpungose & Khoza, 2020). A learning management system (LMS) is software that can automate the administration of an activity (Demmans Epp et al., 2020). It is one of the new ways of conducting teaching and learning activities that utilize electronic devices, especially internet network access (Maria Josephine Arokia Marie, 2021; Saxena et al., 2018). In addition, through the LMS students can be given a variety of learning resources, multimedia and games that can help them learn (Hobbs & Tuzel, 2017; Hsu et al., 2019; Molina et al., 2018; Rakimahwati & Ardi, 2019; Sukendro et al., 2020). The purpose of e-learning is to provide quality learning services in a massive and open network so that it can reach a wider range of learners (Sofyana & Rozaq, 2019).
The implementation of e-learning requires the readiness of qualified technology from the teacher so that it can be accessed smoothly by students (Shively & Palilonis, 2018). In addition to easily accessible technology (Delacruz, 2019; Pangrazio et al., 2020), the implementation of e-learning also requires students' digital literacy. In the current era, students are a generation who have been familiar with technology since birth and are commonly referred to as digital natives (Iivari et al., 2020; Porat et al., 2018; Vélez & Zuazua, 2017). The implementation of learning in schools must therefore be adapted to the current development of students' lives, in which students can easily access information from the abundant digital sources available through the digital facilities they own (Kerkhoff & Makubuya, 2021; Kurnianingsih et al., 2017). In the implementation of e-learning, too, students can easily learn through digital media. This has resulted in the need for students to have digital literacy skills in the digital transformation process (Reichert et al., 2020; Temdee, 2019).

Digital literacy is an awareness of a person's attitudes and abilities to use digital facilities properly in identifying, accessing, managing, evaluating, analyzing and inferring from digital resources, adding new knowledge, creating expressions and communicating with others in particular living conditions, so as to allow constructive social action (Noh, 2017; Delacruz, 2019; Nahdi & Jatisunda, 2020; Peled, 2021). Digital literacy involves not only the ability to operate tools such as computers and cell phones, but also the skills to adapt to the capabilities and limitations of those tools in particular circumstances. Thus, not only teachers must have high digital literacy; students must also have the digital literacy to understand and use information in various forms (text, online video, audio recordings, digital libraries and databases) from a very wide variety of sources accessed through digital tools. Digital literacy is one of the efforts to respond to the challenges of technological development (Radovanović et al., 2020; Vélez et al., 2017).

In addition, with digital literacy, students will be able to work critically in absorbing the available information, especially in the implementation of e-learning (Polizzi, 2020). The implementation of e-learning really requires students' digital literacy skills, because e-learning is identified with the use of the internet (Binali et al., 2021), and the internet has both positive and negative impacts on its users (Techataweewan & Prasertsin, 2018), including elementary school students. For elementary school students, digital literacy skills are an absolute must and should be developed further so that students can use digital devices and the information on the internet properly; this can minimize the negative impact of the development of science and technology. Low digital literacy skills will have negative impacts on students' digital activities, such as incorrectly interpreting information, errors in choosing sources of information, and the dissemination of incorrect information (Abdulai et al., 2020). This is what makes digital literacy skills important for digital transformation (Isnawati et al., 2021; Maureen et al., 2018; Radovanović et al., 2020; Temdee, 2019). On this basis, the authors see the need to analyze the digital literacy skills of elementary school students in implementing learning management system (LMS)-based e-learning. Previous research explains that the availability of learning facilities greatly affects students' level of digital literacy (Kerkhoff & Makubuya, 2021; Radovanović et al., 2020), and teachers' digital literacy can also affect the quality of technology-based learning provided to students (Reisoğlu & Çebi, 2020). Previous research has also explored the digital literacy of teachers (Quaicoe & Pata, 2020), but there has been no research on digital literacy skills, especially of elementary school students, in the implementation of e-learning, so such an analysis is necessary. This research aims to analyze students' digital literacy skills through the implementation of e-learning based on a learning management system. It is very interesting to study the effectiveness of e-learning in improving students' digital literacy skills, and it is also very important to know the obstacles to implementing e-learning, especially in elementary schools. It is hoped that this research can provide a solution to the problem of low digital literacy and related problems in implementing LMS-based e-learning in elementary schools.
METHOD
The type of research used is descriptive analysis. The purpose of this study is to describe the digital literacy of elementary school students doing online learning using a learning management system (LMS). The subjects of this study were 25 fifth-grade elementary school students. Data were collected through interviews and surveys. The instrument used for data collection was a digital literacy questionnaire, which was declared valid and reliable after instrument testing. The data collected through the questionnaire cover: 1) the ability to join the LMS class; 2) the ability to understand the symbols used in the LMS; 3) the ability to read and understand the information in the LMS; 4) the ability to communicate in the LMS; 5) the ability to select information from the internet; 6) the ability to produce reliable sources of information; and 7) the ability to think critically in deciphering the information received. The questionnaire uses scores of 1-5 as laid out in Table 1. The data analysis covers data collection, data reduction, data presentation, and decision making. This research focuses on students' digital literacy skills in using digital technology during online learning activities that utilize a learning management system (LMS).
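A minimal sketch of the descriptive tabulation follows, assuming hypothetical responses (25 students × 7 indicators, scored 1-5) and assuming the five score levels map directly to the categories named in the results; the two lowest category labels and the actual boundaries in Table 1 are assumptions.

```python
# Per-indicator category percentages from 1-5 questionnaire scores,
# using hypothetical data; the two lowest category labels are assumed.
import numpy as np

CATEGORIES = ["very poor", "poor", "sufficient", "good", "very good"]
INDICATORS = ["join LMS class", "understand LMS symbols",
              "read/understand info", "communicate in LMS",
              "select internet info", "produce reliable info",
              "think critically"]

def category_percentages(scores: np.ndarray) -> dict:
    """scores: (n_students, 7) integers in 1..5 -> {indicator: {cat: %}}."""
    table = {}
    for j, name in enumerate(INDICATORS):
        col = scores[:, j]
        table[name] = {cat: round(100.0 * float(np.mean(col == level)), 2)
                       for level, cat in enumerate(CATEGORIES, start=1)}
    return table

rng = np.random.default_rng(1)
demo = rng.integers(1, 6, size=(25, 7))   # hypothetical responses
print(category_percentages(demo)["join LMS class"])
```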
RESULT
The digital literacy ability of elementary school students was obtained using a questionnaire after learning in a network-based learning management system for fifth-grade elementary school students. The achievement of digital literacy skills is divided into 7 components: the ability to join the LMS class, basic skills in understanding the symbols used in the LMS, the ability to read and understand information, the ability to communicate in the LMS, the ability to produce reliable sources of information, the ability to choose information on the internet, and the ability to think critically in deciphering the information received. The results of digital literacy skills after the LMS-based e-learning application can be seen in Table 2. Based on Table 2, the digital literacy abilities of elementary school students after the implementation of e-learning based on a learning management system are mostly in the sufficient category. For the first indicator, the ability to join the LMS class, 52.11% of students are in the sufficient category. This is because, before learning with LMS-based e-learning, the teacher communicated with students and parents and provided tutorials on how to enter the LMS class used for learning. However, even though they were given explanations and training in entering the LMS class, many students still had difficulty entering the LMS. Several factors caused these difficulties, including students not paying attention to the instructions given by the teacher and network constraints. Thus, 35.29% of students are below the sufficient category, while the remaining 64.71% are in the sufficient, good and very good categories.

For the second indicator, the basic ability to understand the symbols used in the LMS, 45.6% of students are in the sufficient category. This is because the LMS has many features and symbols, which makes it somewhat difficult for students to understand them. This can be overcome as students become accustomed to using the LMS and its features. In addition, the teacher provides various features in learning so that students can take advantage of them. Thus, 38.62% of students are below the sufficient category, while the remaining 61.38% are in the sufficient, good and very good categories. For the third indicator, the ability to read and understand the information in the LMS, 35.51% of students are in the good category. This is because the teacher gives clear instructions that students can easily understand. In addition, the information presented in the LMS is given in a simple, concise and clear form so that students can easily grasp it, and it is adjusted to what students need at the time of learning. Thus, 41.1% of students are below the sufficient category, while the remaining 58.9% are in the sufficient, good and very good categories.

For the fourth indicator, the ability to communicate in the LMS, the highest percentage, 53.12%, is in the good category. This is because teachers continue to motivate and train students in the discussion forums located in the LMS. Every e-learning meeting using the LMS includes a discussion forum that must be followed by every student in the LMS class, so that even though learning is done through the LMS, students can still communicate with friends and teachers. Thus, 28.73% of students are below the sufficient category, while the remaining 71.27% are in the sufficient, good, and very good categories. For the fifth indicator, the ability to choose information from the internet, the highest percentage, 32.95%, is in the good category. This is because students have been taught to be careful in choosing sources of information on the internet; for example, they are directed to the online library owned by the school. Thus, 34.41% of students are below the sufficient category, while the remaining 65.59% are in the sufficient, good, and very good categories.

For the sixth indicator, the ability to produce reliable sources of information, the highest percentage, 50.19%, is in the good category. This is because students have, from the beginning, tried to choose reliable and accountable information. The selected information is then understood and analyzed by students so that they know which information is right and wrong; from the information received, students can finally produce new information that can be justified. Thus, 39.56% of students are below the sufficient category, while the remaining 60.14% are in the sufficient, good, and very good categories. For the seventh indicator, the ability to think critically in deciphering the information received, the highest percentage, 40.13%, is in the sufficient category. This is because students have difficulty reasoning critically; one underlying factor is that students are not accustomed to critical reasoning. The teacher continues to guide students so that they can think critically in solving the problems presented. Thus, 28.27% of students are below the sufficient category, while the remaining 71.73% are in the sufficient, good, and very good categories.
DISCUSSION
Based on the description above, the use of e-learning based on a learning management system (LMS) can improve the digital literacy skills of elementary school students and can be beneficial in providing 21st-century skills to students (Ghomi & Redecker, 2019; Neumann et al., 2017; Sadaf & Gezer, 2020). Of the seven indicators studied, four have their highest percentages in the good category and three in the sufficient category. Learning in a network-based learning management system (LMS) went well and could be followed by all students. Some students could follow the lessons very well and some had difficulty, because each student has different digital literacy abilities. The results of this study are in line with previous research: implementing e-learning can affect students' digital literacy skills and will also affect student behavior in the use of digital information (Anggrasari, 2020; Jessica et al., 2020; Noh, 2017; Pratama et al., 2019). Implementing e-learning has several advantages; one of the main advantages is that it gives teachers the freedom to add, change or use more innovative learning platforms and follow students' needs (Kong et al., 2017). The implementation of e-learning also allows students to communicate with each other, which can minimize students' feeling of isolation. On the other hand, there are also obstacles in the implementation of e-learning, namely the problem of networks and facilities that are not evenly owned by every student, which is influenced by the socioeconomic status of the parents (Lazonder et al., 2020; Qazi et al., 2020). It is undeniable that learning facilities also have an impact on the quality of education (Putria et al., 2020).
Digital literacy is very much needed by today's students, who are very close to technology, so teachers need to provide more experience so that students become proficient in using digital technology (Thorvaldsen & Madsen, 2020). The implementation of LMS-based e-learning can support the development of students' digital literacy skills. Before improving students' digital literacy competencies, teachers must first have qualified digital literacy competencies themselves; teachers with high digital literacy will be able to guide students in developing their digital literacy competencies (Blevins, 2018). Digital literacy skills are developed not only in learning activities but also through other supporting activities: for teachers, for example, through relevant training; for students, especially elementary school students, through components both in school and outside of school (Moreno-Morilla et al., 2021). In addition, one very important element of digital literacy is the ability to read digital information (Isnawati et al., 2021), which can be done anywhere. This cannot be separated from the role of parents at home, as revealed in previous research (Bubb & Jones, 2020; Kulju & Mäkinen, 2019): parents can continue to guide and supervise their children in their digital activities. This research provides the results of an analysis of students' digital literacy in the implementation of e-learning and identifies the challenges faced, so that solutions to those challenges can also be found. A recommendation for further research is to develop a method that can improve the digital literacy of students, especially elementary school students. This study provides information on which digital literacy indicators need more attention to improve digital literacy skills. A limitation of this study is that it only analyzes students' digital literacy skills in the use of e-learning, without the researchers applying a treatment to the learning in order to obtain better digital literacy skills.
CONCLUSION
The results of this study indicate that improving the digital literacy skills of elementary school students is important and needs to be developed. The development of students' digital literacy skills can be done through the implementation of e-learning based on a learning management system. However, the development of digital literacy skills for elementary school students can be carried out not only through academic activities at school but also through other supporting activities, so elementary school students need not only guidance and supervision from teachers but also the role of parents. Meanwhile, based on the results of the survey, it was found that overall elementary school students have basic skills in using an internet connection and, in addition, can create and select information from the internet.
"Education",
"Computer Science"
] |
Analytical Model of the Slotless Double-Sided Axial Flux Permanent-Magnet Brushless Machines
Analytical approaches, where possible, are suggested for saving simulation time in the design stage of electrical machines. This benefit is highlighted when optimization problems involving many iterations are considered. Hence, this paper presents a 2-D analytical model of the magnetic field distribution, based on the sub-domain method, in slotless double-sided axial flux permanent-magnet (PM) brushless machines (AFPMBMs) with an internal rotor and external stators. According to this method, the machine cross-section is divided into an appropriate number of sub-regions, and the related partial differential equations (PDEs), extracted from Maxwell's equations, are formed for the magnetic vector potential in each sub-region. Applying the curl to the obtained results yields the magnetic flux density components in each sub-region. Based on the superposition theorem, the analytical procedure is applied in two separate steps: in the first step, the magnetic flux originates only from the PMs with various magnetization patterns (i.e., parallel, ideal Halbach, 2-segment Halbach and bar-magnets-in-shifting-directions patterns) and the armature currents are zero; in the second step, all PMs are inactivated and only the armature currents affect the magnetic flux distribution. Finally, the obtained analytical results are compared with those of the finite element method (FEM) to confirm the accuracy of the proposed analytical model. The extracted results reveal the benefit of the analytical model in replacing the FEM to predict the magnetic flux density components in the presented AFPMBMs in a shorter time.
Introduction
At present, the presence of electric machines and their vast range of applications in industry are undeniable. Among them, axial flux permanent magnet machines (AFPMs) are preferred for compact machines with a high torque/weight ratio. From the perspective of structure, they can be divided into single-sided, double-sided and multi-stage machines [1]. The double-sided structure includes external-rotor-internal-stator (TORUS) and internal-rotor-external-stator (AFIR) arrangements, in terms of rotor and stator positions, and each has its own advantages. Extensive research has been carried out on these structures, which can be classified as follows [2]:
1) analytical or numerical models (0-D, 1-D, 2-D, 3-D);
2) PM configurations, such as surface-mounted, surface-inset, buried PMs, spoke PMs, etc.;
3) magnetization patterns, such as parallel, ideal Halbach, 2-segment Halbach, bar magnets in shifting directions, etc.;
4) magnetic field equations formulated in Cartesian, cylindrical or polar coordinates;
5) machine structures, such as slotted, slotless, coreless, etc.;
6) PM shapes, such as rectangular, trapezoidal or circular;
7) considering or not considering the saturation effect;
8) magnetic field calculations based on the PMs, the armature reaction or both.

Analytical and numerical models have been developed for analyzing electrical machines. For instance, in [3] the authors analyze the 3-D magnetic field distribution using the Fast Fourier Transform (FFT), although the 3-D analytical model involves more complexity than other analytical models. The authors in [4] note that, in the finite element method, a large number of air areas must surround the conductors in order to satisfy the boundary conditions at infinity; they developed a hybrid finite element/boundary element (FE/BE) method to avoid such areas and thus reduce the magnetic field calculation effort. In [5], the calculation of 3-D magnetic fields using integral transformations is developed: the scalar magnetic potential is obtained via the discrete Fourier transformation over the angular coordinate and the Hankel transformation over the radial coordinate. In [6], an enhanced three-dimensional (3-D) field reconstruction method for modeling an axial flux permanent magnet machine is described.
As mentioned, the magnetic field in an electric machine may be computed using analytical or numerical models. For instance, analytical models were investigated in [8-19, 21-27, 29, 32-35, 37, 40, 44-48, 50-53] and numerical ones in [7], [36], [39], [41], [42], [49]. The main drawback of numerical methods is their high computational burden and long simulation time. Hence, novel methods such as the image method [20], [38] and the field reconstruction method [28], [30], [31], [49] have been developed in previous studies to reduce the computational cost of the FEM. In [20], the authors claim that their results are more realistic than those of the quasi-3-D method, but no comparison is reported. In the quasi-3-D method [11], [12], [14], [16], [29], [37], [40], it is commonly assumed that the machine is composed of several linear machines: several cutting planes are chosen and analyzed, and the overall result is obtained by summing the results of each plane. In [11] and [12], analytical methods are studied for the TORUS type in cylindrical coordinates and for a single-sided structure with Halbach and axially magnetized arrangements in polar coordinates, respectively, and the time savings of the analytical model compared with numerical ones are described. In [14] and [16], the authors investigated analytical methods for a single-sided slotted-stator structure in cylindrical coordinates. In [29], Parviainen et al. obtained the flux density distribution in the air-gap region considering stator slot openings; the analytical method is applied to a slotted-stator AFIR-type PM machine. Alipour et al. [37] employed the Schwarz-Christoffel transformation to calculate the circumferential and perpendicular components of the air-gap flux density due to the PMs and the armature currents. Tiegna et al. [40] extended a function that captures the radial dependence of the magnetic field and is applicable to any type of PM; it has been framed by combining the FEM and multi-slice analytical methods, which are able to account for the end effects of the machine. The 2-D analytical method was also investigated in [8], [10], [15], [17], [19], [21], [24], [26], [33-35], [44-48], [50], [53]. In [8] and [10], the authors developed a 2-D analytical method for magnetic field calculations in the PM and air-gap regions of a single-sided structure in cylindrical coordinates, in Cartesian coordinates in [54], and for a double-sided slotted stator in [15]. The 2-D analytical method was also described for the slotted-stator TORUS type [19], [24] and for the calculation of cogging torque and EMF [21]. This analytical model has further been employed to calculate the torque, EMF and inductances of a slotted stator with surface-inset PMs [26]. Besides, 2-D analytical methods have been applied to other kinds of machines, such as radial flux machines [33], linear machines [35], flux switching machines with a double-sided structure [43] and a single-sided structure with two flux return plates [44]. Zhu et al. [34] studied an accurate sub-domain model for a slotted-stator structure with radial and parallel magnetization patterns in polar coordinates.
The reviewed papers indicate that analytical models, where applicable, are preferred for the following three reasons: 1) Analytical models are faster than numerical ones, which is essential for optimization problems with numerous iterations.
2) The analytical method provides a better understanding of the system, helping to comprehend the governing equations of the electrical machine.
3) The analytical model is more flexible when motor specifications, such as the motor dimensions or the number of PMs, are modified, unlike numerical methods, in which changing the specifications requires remodeling the machine.
It should be mentioned that some authors of analytical models assumed infinite core permeability to simplify the analysis, which provides no insight into the magnetic flux density distribution in the cores [7], [22], [33]. Also, some authors investigated only the PM contribution to the magnetic field distribution and did not consider the effect of armature currents [49], [52], while others analyzed the effect of armature currents without describing the PM contribution [35], [55]. Moreover, most previous studies do not treat a variety of magnetization patterns. Therefore, an accurate analytical model of the electrical machine with finite core permeability is essential for predicting the magnetic flux density due to PMs with various magnetization patterns and due to armature currents.
The main contribution of this paper is the definition of an analytical model for the slotless AFPMBM under study. The permeability of the cores is assumed to be finite, and the effects of both the armature currents and the PMs, with various magnetization patterns (parallel, ideal Halbach, 2-segment Halbach and bar magnets in shifting directions), are described. The magnetic flux density components in each sub-region are obtained by employing the sub-domain technique and applying Maxwell's equations.
Methodology
The electromagnetic problem begins by invoking a set of assumptions that enable the analytical solution of the governing partial differential equations (PDEs) originating from Maxwell's equations. The magnetization patterns are expressed in terms of their Fourier series expansions.
In this paper, the formulation is based on the magnetic vector potential, which leads to a set of Laplace and Poisson equations. Based on the governing equations and a set of boundary conditions, a general solution is assigned to each region [7]. The geometry of the presented slotless AFPMBM, which consists of eleven sub-regions, is shown in Fig. 1. The adopted assumptions are: 1) the coordinates r, θ and z of the polar system are replaced by z, x and y, respectively, so that the motor is modeled in Cartesian coordinates; 2) as a consequence of this coordinate change, the radial magnetization pattern is replaced by the parallel one; 3) all materials are isotropic; 4) the media have finite relative permeability; 5) saturation effects are neglected; 6) the motor has a slotless stator structure; 7) the eddy current reaction field is neglected.
Governing PDEs
The governing PDEs in all sub-regions of the presented slotless AFPMBM can be written in terms of the magnetic vector potential as $\nabla^2 \mathbf{A} = -\mu_0(\mathbf{J} + \nabla \times \mathbf{M})$, where $\mathbf{J}$ is the armature current density and $\mathbf{M}$ is the magnetization. This equation is employed in two separate steps to calculate the magnetic flux density vector in each sub-region. In the first step, the magnetic flux density is produced only by the PMs and the armature currents are zero; in the second step, the magnetic flux density originates only from the armature reaction and all PMs are inactivated.
Magnetic flux density due to only PMs
In this step, only the PMs affect the magnetic flux density distribution in the sub-regions, which leads to two categories of PDEs. The first group pertains to all sub-regions except for the PMs, where the governing PDEs are Laplace equations, $\nabla^2 A_i = 0$, with i ∈ {pe, ps, pw, pa, r, sa, sw, ss, se}. Solving these PDEs yields the magnetic vector potential in each sub-region of this group in the form $A_i(x, y) = \sum_n \left(a_{i,n} e^{\omega_n y} + b_{i,n} e^{-\omega_n y}\right)\cos(\omega_n x) + \left(c_{i,n} e^{\omega_n y} + d_{i,n} e^{-\omega_n y}\right)\sin(\omega_n x)$, where $\omega_n = n\pi/\tau_p$, in which $\tau_p$ is the pole pitch.
The second group of equations relates to the PM sub-regions, where the governing equations are Poisson equations, $\nabla^2 A_{pm} = -\mu_0 \nabla \times \mathbf{M}$, where $\mu_0$ is the free-space permeability. Note that, in the sub-domain technique, the magnetization patterns are defined by their Fourier series expansions for the tangential component, $M_x$, and the normal component, $M_y$, where $M_{xn}$ and $M_{yn}$ are the Fourier series components. The amplitudes of these components for the investigated magnetization patterns and their illustrative representation are given in Table 1 and Fig. 2, respectively. According to the Maxwell equations defined in the PM sub-regions and the magnetization pattern components, the corresponding solution can be extracted for the PM sub-regions.
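As a rough numerical cross-check of the tabulated amplitudes, the Fourier components of a magnetization pattern can be computed by projecting the waveform onto the harmonics. The following minimal Python sketch does this for an assumed parallel pattern; the pole pitch, magnet-arc ratio and remanence are illustrative placeholder values, not the machine data of Table 2.

```python
import numpy as np

MU0 = 4e-7 * np.pi   # free-space permeability (H/m)
TAU_P = 0.03         # pole pitch (m), placeholder value
ALPHA_P = 0.8        # magnet width to pole pitch ratio, placeholder
B_R = 1.2            # PM remanence (T), placeholder

def magnetization_parallel(x):
    """Normal magnetization M_y(x) of a parallel pattern over one pole pair.

    Each magnet spans ALPHA_P * TAU_P of its pole, with alternating
    polarity; the tangential component M_x is zero for this pattern.
    """
    M = B_R / MU0
    xm = np.mod(x, 2.0 * TAU_P)
    My = np.zeros_like(xm)
    My[np.abs(xm - 0.5 * TAU_P) <= 0.5 * ALPHA_P * TAU_P] = M    # north pole
    My[np.abs(xm - 1.5 * TAU_P) <= 0.5 * ALPHA_P * TAU_P] = -M   # south pole
    return My

def fourier_components(f, n_max=7, n_pts=4096):
    """Cosine/sine Fourier components of f over one pole pair [0, 2*TAU_P)."""
    x = np.linspace(0.0, 2.0 * TAU_P, n_pts, endpoint=False)
    y = f(x)
    comps = {}
    for n in range(1, n_max + 1):
        wn = n * np.pi / TAU_P
        # For uniform periodic sampling, the projection reduces to a mean.
        comps[n] = (2.0 * np.mean(y * np.cos(wn * x)),
                    2.0 * np.mean(y * np.sin(wn * x)))
    return comps

for n, (c_n, s_n) in fourier_components(magnetization_parallel).items():
    print(f"n={n}: cos component = {c_n:12.1f}  sin component = {s_n:12.1f} (A/m)")
```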
Taking the curl of the magnetic vector potential determined in each sub-region yields the magnetic flux density components, $B_x = \partial A / \partial y$ and $B_y = -\partial A / \partial x$. To account for the rotor rotation, x must be substituted by $x - x_r$, where $x_r = v t + x_0$ describes the rotor motion; here $v$ is the rotor angular velocity $\omega_r$ converted to a linear translation speed, $t$ is time, and $x_0$ is the initial rotor position. For the bar-magnets-in-shifting-directions and 2-segment Halbach patterns, the ratios of the x-direction and y-direction magnetized PM widths to the pole pitch enter the Fourier amplitudes of Table 1.
Magnetic flux density due to only armature currents
In this step, the PMs are inactivated in order to estimate the effect of the armature currents on the magnetic flux density distribution. As in the previous step, two categories of PDEs are formed. The first group includes all sub-regions except for the winding sub-regions, where the governing PDEs are Laplace equations, $\nabla^2 A_i = 0$, with i ∈ {pe, ps, pa, ppm, r, spm, sa, ss, se}. Solving these equations leads to magnetic flux density expressions similar to Eqs. (9)-(10). The second group comprises the windings, where the governing PDEs are Poisson equations, $\nabla^2 A_w = -\mu_0 J$. The Fourier series expansion of the armature current density is required to solve this equation. For this purpose, the armature current density is expressed as a Fourier series with components $J_{1n}$ and $J_{2n}$, which for a three-phase motor are calculated from the phase currents and the winding layout,
where $N$ is the number of turns of each coil, excited by the balanced three-phase currents $i_a = I_m \cos(\omega t)$, $i_b = I_m \cos(\omega t - 2\pi/3)$ and $i_c = I_m \cos(\omega t + 2\pi/3)$, where $I_m$ is the maximum phase current. The solution of the magnetic vector potential in the winding sub-regions is then determined accordingly. As in the previous section, applying the curl to the obtained magnetic vector potential yields the magnetic flux density components.
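To make the second step concrete, the sketch below evaluates the balanced three-phase currents and numerically extracts the Fourier components $J_{1n}$ and $J_{2n}$ of the resulting current density. The coil-side width, placement, turn count and frequency are illustrative assumptions; the actual winding layout of the studied machine would replace them.

```python
import numpy as np

TAU_P = 0.03      # pole pitch (m), placeholder
W_C = 0.008       # coil-side width (m), placeholder
N_TURNS = 40      # turns per coil, placeholder
A_COIL = 1.0e-4   # coil-side cross-section (m^2), placeholder
I_MAX = 10.0      # maximum phase current (A), placeholder
OMEGA = 2.0 * np.pi * 50.0   # electrical angular frequency (rad/s), placeholder

def phase_currents(t):
    """Balanced three-phase currents (i_a, i_b, i_c) in amperes."""
    return tuple(I_MAX * np.cos(OMEGA * t - k * 2.0 * np.pi / 3.0) for k in range(3))

def current_density(x, t):
    """Armature current density J(x, t) over one pole pair.

    Each phase is modeled as a rectangular current block of width W_C plus
    its return conductor one pole pitch away, with the three phases shifted
    by TAU_P / 3 -- a simplifying assumption for a slotless layout.
    """
    J = np.zeros_like(x)
    for k, i_ph in enumerate(phase_currents(t)):
        centre = k * TAU_P / 3.0 + 0.5 * W_C
        for sign, offset in ((+1.0, 0.0), (-1.0, TAU_P)):
            xm = np.mod(x - centre - offset, 2.0 * TAU_P)
            mask = np.minimum(xm, 2.0 * TAU_P - xm) <= 0.5 * W_C
            J[mask] += sign * N_TURNS * i_ph / A_COIL
    return J

def J_fourier(t, n_max=5, n_pts=4096):
    """Numerical Fourier components (J_1n cosine, J_2n sine) of J(x, t)."""
    x = np.linspace(0.0, 2.0 * TAU_P, n_pts, endpoint=False)
    J = current_density(x, t)
    return {n: (2.0 * np.mean(J * np.cos(n * np.pi / TAU_P * x)),
                2.0 * np.mean(J * np.sin(n * np.pi / TAU_P * x)))
            for n in range(1, n_max + 1)}

print(J_fourier(t=0.0))
```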
The boundary conditions
The magnetic vector potential extracted from the Maxwell equations in each sub-region contains four unknown coefficients. With eleven sub-regions, 44 unknowns are therefore available, and 44 equations are needed to determine the magnetic vector potential in the slotless AFPMBM under study. Note that, according to the geometry and the applied coordinate system, some coefficients must be zero, leaving 40 unknowns defined by the PDEs of the sub-regions. To calculate these 40 unknowns, the magnetic boundary conditions are employed. These boundary conditions follow from the continuity relations at each interface, $H_{x,i} = H_{x,j}$ and $B_{y,i} = B_{y,j}$, where H is the magnetic field intensity vector. All 40 boundary conditions are given in the Appendix.
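The structure of such a boundary-condition system is easiest to see on a reduced example. The following sketch solves a two-coefficient analogue of the 40x40 system: a single air-gap region bounded below by a PM surface with a prescribed normal flux density and above by an infinitely permeable stator. The gap length, pole pitch and surface flux density are assumed placeholder values; the full model stacks one such block per interface into the 40x40 matrix and solves it the same way.

```python
import numpy as np

TAU_P = 0.03   # pole pitch (m), placeholder
G = 0.004      # air-gap length (m), placeholder

def airgap_coefficients(n, By_surface):
    """Coefficients (a, b) of A_n(y) = a*exp(w*y) + b*exp(-w*y) in the gap.

    Row 1 imposes the normal flux density B_y = By_surface at the magnet
    surface (y = 0); row 2 imposes H_x = 0 at an infinitely permeable
    stator surface (y = G). With A = f(y) sin(w x): B_y = -w f(y) cos(w x)
    and H_x is proportional to f'(y).
    """
    w = n * np.pi / TAU_P
    M = np.array([[1.0,            1.0],              # -w * f(0) = By_surface
                  [np.exp(w * G), -np.exp(-w * G)]])  # f'(G) = 0
    rhs = np.array([-By_surface / w, 0.0])
    return np.linalg.solve(M, rhs)

def By_midgap(n, By_surface):
    """Amplitude of B_y evaluated at mid-gap (y = G/2) for harmonic n."""
    w = n * np.pi / TAU_P
    a, b = airgap_coefficients(n, By_surface)
    return -w * (a * np.exp(0.5 * w * G) + b * np.exp(-0.5 * w * G))

print(f"B_y at mid-gap: {By_midgap(n=1, By_surface=0.9):.3f} T")
```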
Results and discussion
To validate the derived analytical expressions, the magnetic field distributions in each sub-region of the presented AFPMBM are compared with those of the FEM. For this purpose, a motor with the specifications of Table 2 is employed, and results are generated with both the analytical and numerical models. The analytical and numerical magnetic flux density distributions due to the armature currents and the PMs are shown in Figs. 3 and 4, respectively, where acceptable agreement between the two models can be observed.
Note that the facing PMs on the two sides of the presented motor with parallel magnetization patterns play an important role in the magnetic flux in the rotor. In this study, the facing PMs on the two sides are magnetized in opposite directions, which reduces the tangential component of the magnetic flux in the rotor. The results also reveal that, for the Halbach magnetization pattern, the magnetic flux passes through the PMs themselves. Therefore, for the investigated magnetization patterns it is possible to eliminate the rotor core and replace it with a lighter or cheaper material.
The slotless stator structure of the studied motor increases the effective magnetic air gap, so that the armature currents have no significant effect on the magnetic flux density distribution compared with the contribution of the PMs. Also, the flux distribution produced by the ideal Halbach magnetization pattern exhibits the minimum total harmonic distortion among the investigated magnetization patterns.
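The harmonic-distortion comparison can be reproduced numerically from any flux density waveform, whether it comes from the analytical model or from FEM. The sketch below defines a THD measure and applies it to two placeholder waveforms (a square wave, and a sinusoid standing in for a nearly sinusoidal ideal-Halbach field); the actual waveforms of the studied machine are not reproduced here.

```python
import numpy as np

def total_harmonic_distortion(b_wave):
    """THD of a sampled periodic flux-density waveform (one electrical period).

    THD = sqrt(sum of squared harmonic amplitudes for n >= 2) / fundamental.
    """
    spec = np.fft.rfft(b_wave) / len(b_wave)
    amps = 2.0 * np.abs(spec[1:])            # one-sided harmonic amplitudes
    return np.sqrt(np.sum(amps[1:] ** 2)) / amps[0]

theta = np.linspace(0.0, 2.0 * np.pi, 2048, endpoint=False)
b_square = np.sign(np.sin(theta))            # placeholder waveform, high THD
b_sine = np.sin(theta)                       # nearly sinusoidal field, low THD
print(f"square-wave THD: {total_harmonic_distortion(b_square):.3f}")   # ~0.48
print(f"sinusoid THD:    {total_harmonic_distortion(b_sine):.3f}")     # ~0.00
```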
The simulation time is the main benefit of the proposed analytical model. In this study, a computer with 32 GB of RAM and an i7-7700 processor was employed, and a maximum mesh element size of 4 mm was used for the FEM model. Under these conditions, the simulation time of the analytical model was 11 times shorter than that of the numerical model for the studied motor. Hence, applying the analytical model in the design stage of the presented slotless AFPMBM saves a considerable amount of time.
Conclusions
An exact 2-D analytical sub-domain model for calculating the magnetic fields of a slotless AFPMBM has been proposed in this paper. The analytical procedure was carried out in two separate steps, covering the effects of the PMs and of the armature currents, respectively. Various magnetization patterns, namely parallel, ideal Halbach, 2-segment Halbach and bar magnets in shifting directions, were considered to investigate the effect of the PMs on the magnetic flux density distribution in each sub-region. The FEM was utilized to confirm the accuracy of the proposed analytical model, and the results reveal good agreement between the analytical and numerical models. The benefits of the analytical model were demonstrated by a straightforward comparison of the analytical and numerical simulation times, with the analytical model proving 11 times faster than the FEM.
Appendix
Imposing the boundary conditions between two adjacent sub-regions, based on the continuity of the tangential components of the magnetic field intensity and the normal components of the magnetic flux density, the following equations are formed:

| 3,900.4 | 2020-06-15T00:00:00.000 | ["Physics"] |
Understanding photosynthetic light-harvesting: a bottom-up theoretical approach
We discuss a bottom up approach for modeling photosynthetic light-harvesting. Methods are reviewed for a full structure-based parameterization of the Hamiltonian of pigment–protein complexes (PPCs). These parameters comprise (i) the local transition energies of the pigments in their binding sites in the protein, the site energies; (ii) the couplings between optical transitions of the pigments, the excitonic couplings; and (iii) the spectral density characterizing the dynamic modulation of pigment transition energies and excitonic couplings by protein vibrations. Starting with quantum mechanics perturbation theory, we provide a microscopic foundation for the standard PPC Hamiltonian and relate the expressions obtained for its matrix elements to quantities that can be calculated with classical molecular mechanics/electrostatics approaches including the whole PPC in atomic detail and using charge and transition densities obtained with quantum chemical calculations on the isolated building blocks of the PPC. In the second part of this perspective, the Hamiltonian is utilized to describe the quantum dynamics of excitons. Situations are discussed that differ in the relative strength of excitonic and exciton-vibrational coupling. The predictive power of the approaches is demonstrated in application to different PPCs, and challenges for future work are outlined.
Introduction
The investigation of the primary reactions of photosynthesis [1] is an exciting research topic for many reasons. First of all, these reactions provide the basis of our life on earth, which relies on the conversion and storage of solar energy. Second, we know
the molecular structures (e.g. ref. 2-4 and references therein) and the photophysical properties [5] of the key proteins involved in these reactions in great detail. Third, the complexity of these systems on the one hand requires theory for a structure-based interpretation of experimental results. [6,7] On the other hand, it is a challenge for theory itself [6,8] to find the right approximations that still allow us to describe the reactions and to draw conclusions about structure-function relationships.
In this perspective, we will focus on the light-harvesting process in photosynthetic pigment-protein complexes (PPCs) and discuss bottom-up theoretical approaches that shall be used to understand building principles realized by Nature in different systems for efficient light-harvesting. Two important interactions that need to be described are the pigment-pigment and the pigment-protein coupling. In most PPCs, the distances between atoms of different pigments are large enough that pigments do not exchange electrons. Nevertheless, the excited state wavefunctions of PPCs are often delocalized due to the Coulomb coupling between optical transitions of the pigments, the excitonic coupling. This coupling allows for a non-radiative transfer of excitation energy between different pigments. By dynamically modulating the transition energies of the pigments and their excitonic couplings, the protein introduces a dissipative element that allows for a directional energy transfer to the low-energy exciton states, which may still be delocalized over a certain number of pigments. The spatial location of these states is also controlled by the protein environment, for example, by tuning the average optical transition energies of the individual pigments, the site energies.
The description of the dynamics of interacting electrons and nuclei after optical excitation towards quasi-equilibrium is a complicated many-body problem. In the spirit of the Born-Oppenheimer approximation, [9] the mass differences between electrons and nuclei can be used to introduce potential energy surfaces (PES) that govern the motion of nuclei in different electronic states of the PPC. If appropriate PES have been defined and nuclear relaxation in these PES is fast, excitation energy transfer can be described by using the weak inter-PES coupling as a perturbation and assuming a thermally relaxed initial state of nuclei. [10,11] If, on the other hand, the excitonic coupling is strong, any nuclear reorganization occurring upon optical excitation or excitation energy transfer may be neglected; the nuclei stay relaxed and provide a dissipative environment for the excitons during relaxation between different delocalized states, as described by Redfield theory (e.g. ref. 12 and references therein). If the excitonic coupling between pigments and the modulation of the pigments' transition energies by the protein are of similar strength, more advanced theories are needed. In Modified Redfield theory, [13-15] the nuclear reorganization after optical excitation of a delocalized exciton state is taken into account by introducing excitonic PES. If relaxation in these PES is fast compared to exciton transfer, a rate constant can be defined. In photosynthetic complexes, the pigments can often be divided into domains with strong intra-domain excitonic couplings and weak excitonic couplings between pigments in different domains. [17-20] If, in a domain of strongly coupled pigments, the coupling between different excitonic PES and the corresponding exciton-vibrational reorganization energy are of equal magnitude, there is no small parameter, and the exciton-vibrational quantum dynamics should be described using non-perturbative approaches, [21-28] which are, however, numerically costly.
Besides the development of a dynamic theory necessary for a description of the quantum-dissipative motion of excitons, the PPC Hamiltonian has to be parameterized by structure-based simulations.Three classes of parameters need to be determined: (i) the excitonic couplings between pigments, (ii) the site energies of the pigments and (iii) the spectral density of the exciton-vibrational coupling, describing the modulation of (i) and (ii) by the protein dynamics.
Several classes of methods have been developed for calculating these parameters. These methods differ in the way they account for the electronic and nuclear polarization as well as the charge density of the environment of a pigment. The most detailed description of the protein environment is obtained with QM/QM and QM/MM approaches. In QM/QM, the PPC is divided into subsystems (e.g., pigments and protein parts) that are all described with quantum chemistry (QC, which in this context is a synonym for QM). Approximations occur in the treatment of subsystem couplings and in the possibly limited number of subsystems that can be considered. [3,29] In QM/MM, the environment is described by classical (i.e., non-quantum) molecular mechanics. Here, one has to distinguish two basic strategies. In the actual QM/MM framework, the pigment is treated with QC, while the dynamics of the environment is obtained from a classical molecular dynamics (MD) simulation. [49,50] Application of this procedure to a PPC with many pigments is cumbersome, however, as a separate simulation has to be performed for each pigment being exclusively the QC part. [31-38] A major drawback of the latter strategy is the mismatch between the pigment geometries generated by MD and the optimal QC geometry. [36,51] In any QM/MM approach, the nuclear polarization is described explicitly by propagating the classical equations of motion for the atoms, and the electronic polarization is taken into account either implicitly or explicitly by using a polarizable force field. [38,52] The charge density of the protein is included in the QC calculation in the form of classical background charges. The latter type of QC is also performed in the QC/Back approaches, but on a single geometry that is meant to represent the average equilibrium structure of the PPC in the electronic ground state. [41-44] A critical point in the semi-classical approaches (b)-(d) is the neglect of the Pauli repulsion between the electrons of the pigment and the protein, due to the classical treatment of the latter. The electrons tend to be solvated by the classical dielectric or attracted by positive classical point charges and, therefore, a distortion of the electronic wavefunction may result, referred to as the electron-leakage problem. [3,53,54] In the alternative QC/E2 approaches, [45-48] QC calculations are first performed on the pigment in vacuo, and the electrostatic potentials of the charge and transition densities are fitted by atomic partial charges. These charges are used in a second step in classical electrostatics calculations including the polarizability of the environment. In this way, leakage artifacts are avoided, however, at the price of neglecting all changes of the charge and transition densities due to the pigment-protein interaction. We note that in all approaches one encounters the problem of the limited accuracy of QC resulting from the necessarily approximate treatment of the many-electron system.
In the light of the complexity of the problem, an evaluation of the dynamic theories and parameter calculation schemes by testing against experimental data is needed. [44-48] Concerning the spectral density of the exciton-vibrational coupling, line-narrowing spectra [55-58] provide valuable information to compare with ref. 33, 37, 51 and 59. Exciton dynamics is investigated by nonlinear time-resolved spectroscopy. Much of our current knowledge about the time scales of exciton transfer and relaxation was obtained from pump-probe spectroscopy. [6] More recently, two-dimensional electronic spectroscopy has provided complementary information. [63-65] In particular, long-lived quantum beats detected with this technique in PPCs at low temperature, [61] and even at room temperature [66] and for weakly coupled pigments, [67] have raised the attention of researchers from different fields [68] and of the general public. [69] This perspective is organized in the following way. We start by defining the standard PPC Hamiltonian that contains the minimal ingredients for the study of the quantum dynamics of excitons and point out the underlying assumptions and approximations. In the next section, we provide a microscopic foundation for this Hamiltonian and discuss its parameterization by structure-based microscopic theory, with a focus on QC/E2 approaches, since the latter have so far led to the best agreement with experimental data. Afterwards, the PPC Hamiltonian is used to study the interplay of excitonic and vibrational motion for different relative strengths of excitonic and exciton-vibrational coupling. Finally, we discuss applications of the theory and calculation schemes to simulate light-harvesting in the Fenna-Matthews-Olson (FMO) protein from green sulfur bacteria, the major light-harvesting complex of photosystem II of higher plants (LHCII), cyanobacterial photosystem I and photosystem II.
Hamiltonian of the pigment-protein complex
The standard Hamiltonian of a PPC used to study energy transfer contains three parts,

$H = H_{\rm ex} + H_{\rm ex-vib} + H_{\rm vib}. \qquad (1)$

The exciton Hamiltonian $H_{\rm ex}$, expanded with respect to localized excited states $|m\rangle$ and $|n\rangle$ of the PPC, reads

$H_{\rm ex} = \sum_{m,n} H^{(0)}_{mn}\, |m\rangle\langle n|, \qquad (2)$

where the exciton matrix

$H^{(0)}_{mn} = \delta_{mn}\, E_m + (1 - \delta_{mn})\, V_{mn} \qquad (3)$

contains the local excitation energies $E_m$ of the pigments in the diagonal and the interpigment excitonic couplings $V_{mn}$ in the off-diagonal parts; the superscript (0) indicates that these quantities are taken at the equilibrium position of the nuclei in the electronic ground state of the PPC. In terms of molecular wavefunctions of the pigments, the localized excited state $|m\rangle$ is given as the product of the excited state wavefunction $\varphi^{(e)}_m$ of pigment m and the ground state wavefunctions $\varphi^{(g)}_k$ of the other pigments. The excitation energy $E_m$ corresponds to the energy at which pigment m in its local binding site in the protein would absorb light if its optical transition were not coupled by $V_{mn}$ to the optical transitions of the other pigments n. Below, we will use microscopic theory to derive expressions for $E_m$ and $V_{mn}$.
The exciton-vibrational coupling Hamiltonian $H_{\rm ex-vib}$ describes the modulation of the site energies and excitonic couplings by the vibrational dynamics of the complex. It is assumed that the exciton parameters depend linearly on the displacements of the nuclei from their equilibrium positions, that is,

$H_{\rm ex-vib} = \sum_{m,n} \sum_{\xi} \hbar\omega_{\xi}\, g_{\xi}(m,n)\, Q_{\xi}\, |m\rangle\langle n|, \qquad (4)$

where $g_{\xi}(m,n)$ and $Q_{\xi}$ are dimensionless coupling constants and vibrational coordinates, respectively, and $\hbar\omega_{\xi}$ is the energy of vibrational quanta in mode $\xi$. A normal mode analysis (NMA) will be used below to provide a microscopic foundation for this Hamiltonian. We note that $Q_{\xi}$ is related to the creation and annihilation operators $C^{\dagger}_{\xi}$ and $C_{\xi}$, respectively, of vibrational quanta by $Q_{\xi} = C^{\dagger}_{\xi} + C_{\xi}$. Rate constants for exciton transfer, as well as lineshape functions of optical transitions, are related in the second part of this perspective to the spectral density of the exciton-vibrational coupling, which contains the coupling constants $g_{\xi}(m,n)$ introduced in eqn (4), describing the fluctuations of the site energies (m = n) and excitonic couplings (m ≠ n).
In the spirit of an NMA, the nuclear dynamics is described by a Hamiltonian of uncoupled harmonic oscillators in the dimensionless coordinates $Q_{\xi}$,

$H_{\rm vib} = \sum_{\xi} \hbar\omega_{\xi} \left( C^{\dagger}_{\xi} C_{\xi} + \tfrac{1}{2} \right). \qquad (6)$
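As a minimal illustration of how the exciton Hamiltonian of eqn (2) and (3) is used in practice, the sketch below assembles $H^{(0)}_{mn}$ for a small aggregate and diagonalizes it to obtain exciton energies and delocalization weights. The three site energies and couplings are illustrative placeholder numbers, not values fitted to any PPC.

```python
import numpy as np

def exciton_hamiltonian(site_energies, couplings):
    """Exciton matrix H^(0)_mn: site energies E_m on the diagonal,
    excitonic couplings V_mn in the off-diagonal parts (all in cm^-1)."""
    H = np.diag(np.asarray(site_energies, dtype=float))
    for (m, n), V in couplings.items():
        H[m, n] = H[n, m] = V
    return H

# Placeholder three-pigment example (values are illustrative, not fitted).
E = [12400.0, 12500.0, 12250.0]                 # site energies (cm^-1)
V = {(0, 1): 90.0, (1, 2): 30.0, (0, 2): 6.0}   # excitonic couplings (cm^-1)

H = exciton_hamiltonian(E, V)
energies, C = np.linalg.eigh(H)                  # exciton energies, coefficients
for k, Ek in enumerate(energies):
    # |c_m(k)|^2 gives the contribution of pigment m to exciton state k
    print(f"exciton {k}: E = {Ek:8.1f} cm^-1, weights = {np.round(C[:, k]**2, 2)}")
```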
Underlying assumptions and applicability
Concerning the electronic Hamiltonian $H_{\rm ex}$, it is assumed that electron exchange between different pigments is negligible. In other words, the electrons stay at their pigments and change their local quantum state by optical excitation and excitation energy transfer. In most PPCs, the interpigment distances are large enough to prevent considerable wavefunction overlap, the presence of which would be a prerequisite for electron exchange. Notable exceptions are the special pairs in the photosynthetic reaction centers [70] and a few long-wavelength absorbing chlorophylls in photosystem I. [71] In the latter case, there exist, however, methods to treat electron exchange in an effective way by including short-range contributions to the site energy shifts and excitonic couplings in the electronic Hamiltonian in eqn (2). One way to obtain these short-range contributions is by relating monomer and dimer QC calculations using an effective Hamiltonian of the type in eqn (2). [72,73] In this way, it became clear that 80% of the excitonic coupling in the special pairs of purple bacteria and photosystem I is of the short-range type and that the site energies are also considerably red-shifted by electron exchange. [73] Concerning the assumed linear coordinate dependence of the exciton-vibrational coupling and the harmonic oscillator form of the vibrational Hamiltonian in eqn (4) and (6), a recent NMA and a comparison of the resulting spectral density with experimental data showed that this Hamiltonian can be justified qualitatively by microscopic calculations. [51] There are, however, some quantitative deviations that most likely result from anharmonicities experienced by the soft degrees of freedom that govern the conformational flexibility of the macromolecule. However, it is known that even a strongly anharmonic system can, in the spirit of a second-order cumulant expansion, be described by an effective spectral density of harmonic oscillators. [74] In summary, we may conclude that the present Hamiltonian provides a description of a wide class of systems and that the two missing aspects, namely electron exchange and anharmonic vibrational motion, may be included in an effective way by adjusting the system parameters.
Microscopic foundation of the PPC Hamiltonian and parameterization
In terms of elementary quantum mechanics/chemistry, the electrons of the pigments move in the Coulomb field of the other electrons and the nuclei of the PPC. Since the pigments in most PPCs are non-covalently bound to the protein and the interpigment distances are large enough, electron exchange between pigments and between pigments and the protein can be neglected to a good approximation. Hence, a Hartree ansatz can be chosen for the electronic states of the PPC, where the individual building blocks are the pigments and the protein chains and residues, in which the electrons delocalize. In principle, the wavefunctions of the electronic ground and excited states of these building blocks, when they are isolated from each other, can be obtained from quantum chemical calculations. We want to see in the following how the optical properties of the pigments change when the Coulomb coupling between these building blocks is switched on. For this purpose, we consider one pigment m with wavefunction $|A^{(m)}_a\rangle$ (a = 0, 1, 2, 3, ...), where the index a counts the electronic states of the pigment, which is surrounded by the N − 1 remaining building blocks of the PPC. The latter comprise the other cofactors and different parts of the protein of the PPC, described by wavefunctions $|B^{(Z)}_b\rangle$, where Z = 1 ... N − 1. Treating the Coulomb coupling V between the building blocks with perturbation theory yields the shift of the electronic energy of state a of pigment m,

$\Delta E^{(m)}_a = \langle A^{(m)}_a \mathbf{B}_{\mathbf{0}}|V|A^{(m)}_a \mathbf{B}_{\mathbf{0}}\rangle + {\sum_{c,b}}' \frac{\left|\langle A^{(m)}_a \mathbf{B}_{\mathbf{0}}|V|A^{(m)}_c \mathbf{B}_b\rangle\right|^2}{E^{(m)}_a + F_{\mathbf{0}} - E^{(m)}_c - F_b}. \qquad (7)$
The index $\mathbf{0} = (0, 0, \ldots, 0)$ denotes the state in which all environmental building blocks are in their electronic ground state, and the prime at the sum indicates that c and b must not simultaneously equal a and 0, respectively; that is, the sum includes only off-diagonal matrix elements of the Coulomb coupling.
For the convergence of the perturbation theory, it is required that the denominator in the sum be sufficiently large. If the environment is in the electronic ground state, i.e., for b = 0, we have c ≠ a and, therefore, a large electronic transition energy of the pigment in the denominator. For c = a, we have b ≠ 0, and a large electronic transition energy of the environmental building blocks appears in the denominator. Finally, we investigate the case b ≠ 0 and c ≠ a. We are mainly interested in the shifts of the ground (a = 0, S0) and first excited (a = 1, S1) states of pigment m, since these are the states that are predominantly involved in energy transfer (due to fast internal conversion between the higher excited states and the first excited state of the pigments). For a = 0, the denominator in eqn (7) is again large, since it now contains the sum of electronic excitation energies of m and of the building blocks. The only critical case is obtained for a = 1. If c = 0, the negative electronic transition energy $E^{(m)}_0 - E^{(m)}_1$ of pigment m may be compensated by the electronic transition energy $F_b - F_0$ of the building blocks. For such a compensation, a single excitation on another building block is required, with transition energy $F^{(Z)}_b - F^{(Z)}_0$. If Z corresponds to a protein part, the denominator in eqn (7) is still large, since the protein starts to absorb light at much higher energies than the pigments. However, if the building block Z is another pigment n, the denominator may even vanish for resonant S0-S1 transitions of the two pigments. Therefore, we have to exclude those particular states where one pigment building block is in its first excited state and the remaining ones are in their ground state. The respective Coulomb coupling $\langle w^{(m)}_{10}|V|w^{(n)}_{0b}\rangle$ is, instead, explicitly included in the exciton Hamiltonian in eqn (3) as the excitonic coupling $V_{mn}$ and, in this way, can be treated non-perturbatively. Evaluating the Coulomb coupling V in this matrix element gives

$V_{mn} = \int d^3r \int d^3r' \, \frac{\rho^{(m)}_{10}(\mathbf{r})\, \rho^{(n)}_{01}(\mathbf{r}')}{4\pi\varepsilon_0\, |\mathbf{r} - \mathbf{r}'|}, \qquad (8)$

which describes the Coulomb coupling between the transition densities of the S0-S1 transitions of pigments m and n, as discussed in more detail below.
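In practice, the Coulomb coupling between transition densities in eqn (8) is often evaluated from atomic transition charges fitted to the electrostatic potential, as in the TrEsp-type approaches mentioned later in this perspective. The following sketch implements such a sum over transition-charge pairs; the charges, positions and screening factor in the example are illustrative assumptions.

```python
import numpy as np

# e^2 / (4 pi eps0) in units of cm^-1 * Angstrom (approximately)
COULOMB = 116140.0

def tresp_coupling(q_m, R_m, q_n, R_n, screening=1.0):
    """Excitonic coupling V_mn (cm^-1) from atomic transition charges.

    q_m, q_n: transition charges (units of e) fitted to the S0->S1
    transition densities of pigments m and n; R_m, R_n: atom positions
    in Angstrom. `screening` approximates the dielectric environment.
    """
    d = np.linalg.norm(R_m[:, None, :] - R_n[None, :, :], axis=-1)
    return screening * COULOMB * np.sum(np.outer(q_m, q_n) / d)

# Toy example: two parallel point dipoles, each built from +/- charges.
qA = np.array([0.2, -0.2]); RA = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0]])
qB = np.array([0.2, -0.2]); RB = np.array([[0.0, 10.0, 0.0], [1.0, 10.0, 0.0]])
print(f"V_mn = {tresp_coupling(qA, RA, qB, RB, screening=0.8):.1f} cm^-1")
```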
Site energies
The remaining parts of eqn (7) are used in the following to derive a microscopic expression for the site energy $E_m$ in eqn (3). The site energy of the optical transition between the ground and the first excited state of pigment m then follows as

$E_m = E_0 + \Delta E^{(m)}_1 - \Delta E^{(m)}_0, \qquad (9)$

where $E_0$ is the S0-S1 transition energy of the isolated pigment, and $\Delta E^{(m)}_1$ and $\Delta E^{(m)}_0$ are the shifts of the S1 and S0 state energies, respectively, of this pigment that result from the Coulomb coupling between the building blocks in the PPC, excluding the excitonic couplings with the S0-S1 transitions of the other pigments as discussed above.
By evaluating the Coulomb matrix elements in eqn (7), we obtain for the shift of the electronic energy of state a = 0, 1 of pigment m an expression (eqn (10)) in which the first term on the r.h.s. denotes the Coulomb coupling between the charge densities of pigment m in electronic state a and building block Z in the electronic ground state; the second line is the Coulomb coupling between the charge densities of building blocks Z and Z' in their electronic ground states; the third line contains the Coulomb coupling between the charge density of pigment m in electronic state a and the transition density of the 0b transition of building block Z; and the last line is the Coulomb coupling between the latter transition density and the transition density of the ac transition of pigment m. Note that the first-order terms involving couplings between environmental building blocks Z and Z' cancel out in the calculation of differences $\Delta E^{(m)}_a - \Delta E^{(m)}_b$. A rigorous examination of matrix elements of the type occurring in eqn (8) and (10) was presented in ref. 75, where these matrix elements were related to Coulomb interactions involving charge and transition densities, starting from a many-electron wavefunction. As shown there, a matrix element of this type can be described by the Coulomb interaction between atomic partial charges $q^{(m)}_I(a,c)$ and $q^{(Z)}_J(0,b)$ that are placed at the positions $R^{(m)}_I$ and $R^{(Z)}_J$ of the Ith atom of pigment m and the Jth atom of building block Z. These charges are determined from a fit of the electrostatic potential of the respective transition or charge densities calculated with QC on the isolated building blocks. In the following, we use these partial charges to obtain insight into the physical origin of the third term on the r.h.s. of eqn (10), which we denote by $W_a$. If a dipole approximation is adopted for building block $B_Z$ and $\sum_J q^{(Z)}_J(0,b) = 0$ is used, the matrix elements in $W_a$ take a form containing the polarizability tensor [76] $\hat{\alpha}_Z$ of building block Z: the electric field $\mathbf{E}$ of the partial charges $q^{(m)}_{I'}(a,a)$ induces a dipole moment $\mathbf{p}_Z = \hat{\alpha}_Z \mathbf{E}$ at molecule (building block) Z, and the field of this dipole moment interacts with the partial charges $q^{(m)}_I(a,a)$ of the pigment. Hence, $W_a$ is the solvation energy of the permanent charge density of electronic state a of pigment m.
In a similar way, the last term on the r.h.s. of eqn (10) can be related to solvation energies of transition densities for all electronic transitions that start from state a. [77] Physically, these terms represent the London dispersive interactions of pigment m in electronic state a with its environment for b ≠ 0, and an inductive effect of the environment on the pigment for b = 0.
3.1.1. The CDC method. In the charge density coupling (CDC) method, [46-48] only the first-order shifts in eqn (10) are explicitly considered for the calculation of the site energy in eqn (9). The higher-order terms arising from polarization are included implicitly by scaling the Coulomb coupling by an inverse effective dielectric constant $\varepsilon_{\rm eff}^{-1}$. The site energy $E_m$ of pigment m is then obtained as

$E_m = E_0 + \frac{1}{\varepsilon_{\rm eff}} \sum_{Z} \sum_{I,J} \frac{\left[ q^{(m)}_I(1,1) - q^{(m)}_I(0,0) \right] q^{(Z)}_J(0,0)}{\left| R^{(m)}_I - R^{(Z)}_J \right|}. \qquad (14)$

As noted above, the partial charges $q^{(m)}_I(0,0)$ and $q^{(m)}_I(1,1)$ of the ground and the excited state of the pigment, respectively, are based on in vacuo QC calculations on the pigments and a fit of the electrostatic potential. [75] The ground state partial charges $q^{(Z)}_J(0,0)$ of the background, formed by the remaining atoms of the PPC, comprise those of the other pigments and those of the protein. The latter are taken from standard molecular mechanics (MM) force fields (e.g. CHARMM [78,79]). Since the protein contains residues with variable charge density, the titratable groups, the protonation pattern has to be determined before the site energy shifts can be calculated. This problem is discussed in the following.
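A minimal sketch of the CDC sum in eqn (14) is given below: the excited-minus-ground difference charges of the pigment interact with the background charges, scaled by $1/\varepsilon_{\rm eff}$. The charges, coordinates and $\varepsilon_{\rm eff}$ in the example are placeholders, not fitted values.

```python
import numpy as np

COULOMB = 116140.0   # e^2 / (4 pi eps0) in cm^-1 * Angstrom

def cdc_site_energy_shift(dq_pigment, R_pigment, q_background, R_background,
                          eps_eff=2.5):
    """CDC shift of the site energy in cm^-1 (eqn (14) without E_0).

    dq_pigment: difference charges q_I(1,1) - q_I(0,0) of the pigment (e);
    q_background: MM partial charges of all other atoms of the PPC (e);
    eps_eff: effective dielectric constant screening the interaction.
    """
    d = np.linalg.norm(R_pigment[:, None, :] - R_background[None, :, :], axis=-1)
    return (COULOMB / eps_eff) * np.sum(np.outer(dq_pigment, q_background) / d)

# Placeholder example: two difference charges near one background charge.
dq = np.array([0.05, -0.05])
Rp = np.array([[0.0, 0.0, 0.0], [1.5, 0.0, 0.0]])
qb = np.array([-0.3])
Rb = np.array([[0.0, 6.0, 0.0]])
print(f"site energy shift: {cdc_site_energy_shift(dq, Rp, qb, Rb):.1f} cm^-1")
```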
3.1.2. The protonation probabilities of titratable residues of the protein. Every protein has titratable groups, i.e. groups that can release a proton. Besides the N- and C-termini of each polypeptide chain (if not chemically modified), a number of amino acid side chains are considered titratable, including those of the acidic amino acid residues Asp and Glu, the basic residues Arg, Lys and His, as well as Tyr and Cys. Deprotonation of any of these sites in the protein (and the subsequent release of the proton into the outer medium) changes the net charge state of the respective site and hence affects the charge distribution in the protein. Since the protonation states of different titratable groups depend on each other by virtue of their electrostatic interaction, the elucidation of the protonation pattern of a PPC on the basis of a crystal structure poses a formidable problem. This problem can be tackled by methods that involve a numerical solution of the linearized Poisson-Boltzmann equation (LPBE) and Monte-Carlo techniques. In the following sections, we shall briefly summarize how these methods work (for further details, see ref. 80-82 and references therein). [83,84] Our PB/QC method is a further extension of this approach to the calculation of excited state energy shifts. Thus, parts of the following sections are also the basis for the description of the site energy calculations in Section 3.1.4.
Since each titratable site has two possible protonation states, a PPC with N titratable groups has $2^N$ possible protonation patterns. (In the case of His, even three possible protonation states are taken into account. For the sake of simplicity, we shall not consider this case explicitly in the following.) A protonation pattern can be characterized by a vector

$\mathbf{x}^{(s)} = \left( x^{(s)}_1, \ldots, x^{(s)}_N \right), \qquad (15)$

where $x^{(s)}_m = 1$ if site m is protonated and $x^{(s)}_m = 0$ if it is deprotonated. Here, $s = 1, \ldots, 2^N$ counts the protonation patterns and $m = 1, \ldots, N$ the titratable sites. In thermal equilibrium, the protonation probability $\langle x_m \rangle$ of group m is obtained from the Boltzmann average

$\langle x_m \rangle = \frac{\sum_s x^{(s)}_m \, e^{-G_s / k_B T}}{\sum_s e^{-G_s / k_B T}}, \qquad (16)$

where $G_s$ is the Gibbs free energy of protonation pattern s. Due to the huge number of possible protonation patterns, a direct evaluation of the statistical average is normally impossible. The solution to this problem is the application of a Monte-Carlo technique described in detail earlier. [82] Note that $G_s$ contains entropic contributions arising from the distribution of protons in the solution and the polarization of the dielectric; hence, a free energy is used instead of an energy. The computation is simplified by choosing a reference protonation state $\mathbf{x}^{(0)}$ and defining $G_s$ with respect to this state, that is, setting $G_0 = 0$. The choice of the reference state is arbitrary. We take the state of the protein in which all titratable groups are uncharged as the reference state. Hence, $x^{(0)}_m = 0$ for Arg, His, the N-terminus and Lys, and $x^{(0)}_m = 1$ for Asp, Glu, the C-terminus, Cys and Tyr. The free energy of an arbitrary protonation pattern s can then be written as

$G_s = -\sum_{m=1}^{N} \left( x^{(s)}_m - x^{(0)}_m \right) \Delta G_p(A_m H, A_m) + \frac{1}{2} \sum_{m=1}^{N} \sum_{n \neq m} \left( x^{(s)}_m + z_m \right)\left( x^{(s)}_n + z_n \right) W_{mn}, \qquad (17)$

where $\Delta G_p(A_m H, A_m)$ is the free energy difference between the protonated ($A_m H$) and the deprotonated ($A_m$) state of the protein-bound group m at a given pH of the surrounding solution and with all the remaining titratable sites in their reference state.
The second term is a correction for the interaction between titratable residues, where we introduced the dimensionless formal charge $z_m$ of the deprotonated group, i.e., $z_m = 0$ for Arg, His, the N-terminus and Lys, and $z_m = -1$ for Asp, Glu, the C-terminus, Cys and Tyr. These formal charges are such that the second term only has a non-zero contribution if both sites m and n are in their non-reference (i.e., charged) state. It is clear that in this case $\Delta G_p(A_m H, A_m)$ and $\Delta G_p(A_n H, A_n)$ alone cannot describe the whole change in free energy, since they only consider single non-reference states and, therefore, miss the interaction between two such states. The detailed form of $W_{mn}$ will be inferred below. We will first discuss the calculation of $\Delta G_p(A_m H, A_m)$.
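A minimal Metropolis Monte-Carlo sketch of the titration average in eqn (16), using a free energy of the eqn (17) form, is given below. The intrinsic deprotonation free energies and the site-site couplings $W_{mn}$ are placeholder numbers for a two-site (Asp, His) toy example; in a real calculation they come from the LPBE, as described next.

```python
import numpy as np

rng = np.random.default_rng(0)
KT = 0.593   # k_B * T at ~298 K, in kcal/mol

def G_pattern(x, x_ref, z, dG_dep, W):
    """Eqn (17): free energy of pattern x relative to the reference pattern."""
    dx = x - x_ref
    charge = x + z               # nonzero only for non-reference (charged) states
    return -np.dot(dx, dG_dep) + 0.5 * charge @ W @ charge

def mc_titration(x_ref, z, dG_dep, W, n_steps=100_000):
    """Metropolis estimate of the protonation probabilities <x_m> (eqn (16))."""
    x = x_ref.astype(float).copy()
    G = G_pattern(x, x_ref, z, dG_dep, W)
    mean_x = np.zeros_like(x)
    for _ in range(n_steps):
        m = rng.integers(len(x))
        x_new = x.copy()
        x_new[m] = 1.0 - x_new[m]             # flip one protonation state
        G_new = G_pattern(x_new, x_ref, z, dG_dep, W)
        if G_new <= G or rng.random() < np.exp(-(G_new - G) / KT):
            x, G = x_new, G_new
        mean_x += x
    return mean_x / n_steps

# Two-site toy example: an Asp (protonated, neutral reference) and a His
# (deprotonated, neutral reference); W has zero diagonal (no self-interaction).
x_ref = np.array([1.0, 0.0])
z = np.array([-1.0, 0.0])
dG_dep = np.array([-1.5, 0.8])                # placeholder, kcal/mol
W = np.array([[0.0, 1.2], [1.2, 0.0]])        # placeholder, kcal/mol
print(np.round(mc_titration(x_ref, z, dG_dep, W), 3))
```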
Thermodynamic cycle.
$\Delta G_p(A_m H, A_m)$ contains an electrostatic contribution and a contribution that is due to the breaking of the chemical bond between the proton and the amino acid $A_m$. The calculation of the latter part can be circumvented by considering the thermodynamic cycle in Fig. 1. In this figure, the four relevant states of the titratable group are depicted: protonated ($A_m H$) and deprotonated ($A_m$), and in each case either bound to the protein (left, index p) or isolated in an aqueous solvent (right, index s). Accordingly, $\Delta G_p(A_m H, A_m)$ and $\Delta G_s(A_m H, A_m)$ are the Gibbs free energies of deprotonation for the protein-bound and isolated group, respectively, at a given pH, while $\Delta G_{sp}(A_m)$ and $\Delta G_{sp}(A_m H)$ are the Gibbs free energies of transfer from the aqueous environment to the protein site for the deprotonated and protonated forms of the group, respectively. The thermodynamic cycle connecting these states (Fig. 1) indicates that

$\Delta G_p(A_m H, A_m) = \Delta G_s(A_m H, A_m) + \Delta G_{sp}(A_m) - \Delta G_{sp}(A_m H). \qquad (18)$

Here, $\Delta G_s(A_m H, A_m)$ at a given pH value of the solution can be determined from the experimental $pK_a$ value of a model compound that represents the titratable group in an aqueous environment (for a comprehensive list of $pK_a$ values and references, see ref. 82),

$\Delta G_s(A_m H, A_m) = k_B T \ln(10) \left( pK_a - \mathrm{pH} \right). \qquad (19)$

To complete the thermodynamic cycle, we have to calculate $\Delta G_{sp}(A_m)$ and $\Delta G_{sp}(A_m H)$. This is done on the basis of an electrostatic model of the protein.
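The cycle of eqn (18), together with the model-compound relation of eqn (19), translates directly into a few lines of code. In the sketch below, the two transfer free energies are placeholder inputs that would come from LPBE calculations (see the electrostatic model that follows); the $pK_a$ and pH are illustrative.

```python
KT_LN10 = 1.364   # k_B * T * ln(10) at 298 K, in kcal/mol

def dG_s(pH, pKa_model):
    """Eqn (19): deprotonation free energy of the model compound in water."""
    return KT_LN10 * (pKa_model - pH)

def dG_p(pH, pKa_model, dG_sp_A, dG_sp_AH):
    """Eqn (18): protein deprotonation free energy via the thermodynamic cycle."""
    return dG_s(pH, pKa_model) + dG_sp_A - dG_sp_AH

# Placeholder: a buried Asp whose charged (deprotonated) form is poorly
# solvated in the protein, shifting deprotonation uphill relative to water.
print(f"dG_p = {dG_p(pH=7.0, pKa_model=4.0, dG_sp_A=3.5, dG_sp_AH=0.5):.2f} kcal/mol")
```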
3.1.2.3. Electrostatic model of the protein.
The protein is modeled as a system of point charges situated in a dielectric medium and surrounded by a solution containing ions. These point charges are the atomic partial charges of an MM force field that are assigned to the atom positions of the PPC. The positions of the heavy atoms are inferred from the crystal structure of the PPC, while the positions of the hydrogen atoms are determined by MM modeling. In modeling the dielectric medium, in which the partial charges are placed, a compromise has to be made between precision and computational effort. The usual procedure is to distinguish between the volume occupied by the atoms of the protein and the environmental aqueous phase and to assign different static dielectric constants to the different regions. In the case of membrane proteins, a further distinction can be made between the membrane region and the remainder of the surrounding medium. Moreover, an ionic strength is assigned to the aqueous phase to represent the ions. Within the electrostatic model, the transfer free energies $\Delta G_{sp}(A_m)$ and $\Delta G_{sp}(A_m H)$ are simply the differences in the electrostatic interaction energy W of the atomic partial charges of group m with their environment between the protein (p) and the solution (s). More precisely, only those atomic partial charges $Q_a$ of group m are considered that differ between the protonated and deprotonated forms of the group (where the index a labels the corresponding atoms). These charges produce an electrostatic potential $\phi$ that is obtained as the solution of the LPBE,

$\nabla \cdot \left[ \varepsilon(\mathbf{r}) \nabla \phi(\mathbf{r}) \right] - \varepsilon(\mathbf{r})\, \kappa^2(\mathbf{r})\, \phi(\mathbf{r}) = -4\pi \sum_a Q_a\, \delta(\mathbf{r} - \mathbf{R}_a), \qquad (20)$

where $\mathbf{R}_a$ is the position of the ath atom of group m. In eqn (20), $\varepsilon(\mathbf{r})$ and $\kappa(\mathbf{r})$ are the position-dependent dielectric constant and inverse Debye screening length, respectively, the latter being determined by the ionic strength. Whereas the solvent properties $I_{\rm solv}$ and $\varepsilon_{\rm solv}$ can be determined experimentally, the membrane and in particular the protein dielectric constants are not so well defined, because of inhomogeneities and rigidities of the latter parts. Accordingly, there has been a long-lasting debate about the appropriate value of $\varepsilon_p$ to be used in the computation of protonation patterns. [81,82,87,88] In early work, a value of $\varepsilon_p = 4$ was suggested [89,90] and is still widely used. The problem of heterogeneity in the protein interior is actually not as severe as one might think at first sight. Simonson and Perahia [91] tackled this problem by microscopic simulations. They found that the fast component of the polarization can be well approximated by a homogeneous continuum model with an optical dielectric constant $\varepsilon_{\rm opt} = n^2 = 2$ (a result that is exploited in the Poisson-TrEsp method, see Section 3.2.2). [92-94] In the case of membrane proteins, it is possible to approximate the dielectric properties of the membrane (or a detergent belt) by defining a slab outside the PPC, to which $\varepsilon_{\rm mem}$ is assigned, [95] as implemented in the software TAPBS [96] based on APBS. [97] We use $\varepsilon_{\rm mem} = 2$, akin to a liquid hydrocarbon phase. An illustrative example is shown in Fig. 2. We note that there are other membrane models [98-100] that remain to be tested in the present context of photosynthetic light-harvesting complexes. Another application of the slab approximation is the modeling of a water-soluble PPC in the interstitial region between two membranes (e.g. the FMO protein between the cytoplasmic membrane/reaction center complex and the chlorosome/baseplate [48]).
The LPBE can be solved numerically on a grid, as detailed elsewhere. [81,82,87,103,104] The interaction energy W is then given by

$W = \sum_i q_i\, \phi(\mathbf{R}_i) + \frac{1}{2} \sum_a Q_a\, \phi(\mathbf{R}_a), \qquad (21)$

where $q_i = q_i(0,0)$ and $\mathbf{R}_i$ are the partial charge and position, respectively, of atom i of the environment in the electronic ground state. The sum over i in eqn (21) runs over all environmental atoms, including those of the other groups n ≠ m with partial charges corresponding to the reference protonation state, as well as those atoms of group m that carry the same partial charge in both protonation states. The two terms in eqn (21) represent the interaction of the charges $Q_a$ with the background charges $q_i$ and with their own reaction field contained in $\phi(\mathbf{r})$. In order to determine W for the four states depicted in Fig. 1, the LPBE has to be solved four times for each group m: with the two charge sets corresponding to the protonated (index h) and deprotonated (index d) forms of m, and with the two dielectric environments corresponding to protein (p) and solvent (s). (In the case of His, even three different protonation states are considered, and the LPBE has to be solved six times.) Thus, in general, we obtain for each group four different electrostatic potentials $\phi^{(h/d)}_{p/s,m}(\mathbf{r})$ that, when put into eqn (21), yield four different energy terms $W^{(h/d)}_{p/s}$, from which the transfer free energies follow (eqn (22)). By using eqn (19), (21) and (22), we obtain from eqn (18) the deprotonation free energy of group m for the reference protonation state of the protein (eqn (23)), consisting of a background charge term (eqn (24)) and a polarization term, often referred to as the ''Born term'' (eqn (25)).

(Fig. 2: Assignment of static dielectric constants to different regions of space in the calculation of protonation patterns of trimeric LHCII. [101] The figure was created with VMD. [102])

Note that the first sum in eqn (24) runs over all background charges of the PPC and the second sum over all background charges of the model compound. Note also that the sum over a in eqn (25) contains no self-interaction of any charge $Q_a$, since the corresponding terms drop out in the calculation of the differences. Finally, we identify the correction term $W_{mn}$ in eqn (17) from the following considerations. As seen above, the change in free energy contains a background and a polarization term. The coupling between charge densities contributes to the background term. If we consider two titratable groups m and n, which are both in their non-reference (charged) state, the sum over i has to be corrected for those atoms that belong to titratable group n. This correction is done by the term in eqn (17), with $W_{mn}$ given by eqn (26) and (27). If, for example, we consider a state where n is a protonated Arg, the respective $q_i$ in eqn (24) correspond to $Q^{(d)}_{a,n}$, which are replaced by the non-reference-state charges $Q^{(h)}_{a,n}$ due to eqn (26) and (27), since $x^{(s)}_n + z_n = 1$ in this case. If, on the other hand, n is a deprotonated Asp, the $q_i$ in eqn (24) would correspond to $Q^{(h)}_{a,n}$, which are replaced by $Q^{(d)}_{a,n}$.

3.1.2.4. Temperature dependence of the protonation pattern. PPCs are routinely investigated at cryogenic temperatures. In order to obtain an optically transparent sample for spectroscopic studies, the buffered aqueous solution has to be supplemented with a glass-forming agent in a significant amount (e.g., 70% glycerol). For an adequate structure-based simulation of optical spectra under these conditions, we have to take into account the influence of the temperature and of the dielectric properties of the glass on the protonation pattern. In earlier work, [45] we assumed the protonation states to be equilibrated at
all temperatures. Later, we learned that this assumption was a severe oversimplification. Unfortunately, information about proton activities and $pK_a$ values under cryo-conditions is scarce. The only work that we are aware of is that of Schulze et al., [105] who studied proton activities in 70% aqueous glycerol as a function of temperature. They found that above 210 K, the $pK_a$ values followed the expected behavior (i.e., obeying van't Hoff's equation with a constant deprotonation enthalpy). Therefore, we concluded that in this temperature range the standard procedure outlined above can be applied with $\varepsilon_p = 4$, $\varepsilon_{\rm solv} = 80$ and $\varepsilon_{\rm mem} = 2$ as reasonable approximations, using the appropriate temperature in the Monte-Carlo titration. [101] The only critical parameter is the pH, which has to be rescaled according to the temperature coefficient of the buffer used. Thereby, it is assumed, based on the work of Douzou, [106] that the glass-forming agent has a negligible influence on the temperature coefficient. Schulze et al. [105] found the $pK_a$ values to approach constant values between 210 and 180 K, which they ascribed to an increase in the viscosity of the glycerol-water mixture close to the glass transition at 180 K. At this temperature, the acid-base equilibria were found to be frozen in, and no further change of $pK_a$ values or proton activities was found upon further cooling. Hence, proton transfer is kinetically hindered at temperatures below 210 K, and the protonation pattern does not represent that of a true thermal equilibrium. This interpretation is further substantiated by the finding of Schulze et al. [105] that the cooling rate has an influence on the finally established $pK_a$ value. Based on these results, we use in our simulations, as a first approximation, the equilibrium protonation pattern established at 210 K also at lower temperatures. We note that the problem of non-equilibrium protonation states in cryo-samples requires further research.
3.1.3. Protonation state dependent site energies with the CDC method. In order to take into account the dependence of the site energy $E_m$ in eqn (14) on the protonation state s of the PPC, we introduce a site energy value $E_m(0)$, which refers to the site energy obtained for the reference protonation state, and take into account the deviations from this value (eqn (28)), where $E_m(0)$ is given by eqn (14) and, for atom J = a of a titratable group Z, includes the partial charge of the reference protonation state. The second term on the r.h.s. of eqn (28) corrects the Coulomb interaction contained in $E_m(0)$ between pigment m and those titratable groups that are in a non-reference state (eqn (29)). There are different possibilities for proceeding with the calculation of energy transfer and optical spectra, as will be discussed further below. Before that, we describe another method for the calculation of site energies.
3.1.4. The PB/QC method. In the Poisson-Boltzmann/quantum chemical (PB/QC) method, [45,46,101,107] besides the first-order contributions in eqn (10), representing the charge density interaction, also that part of the second-order site energy shift which has been identified as the solvation energy $W_a$ (eqn (13)), as well as higher-order terms representing the screening and local-field corrections of the charge density interaction, are explicitly taken into account. This is done in an approximate way by assigning dielectric constants to certain regions of space and solving the LPBE. Any dispersive interactions, which are of the type given in the fourth term on the r.h.s. of eqn (10) (b ≠ 0), are neglected.
3.1.4.1. Thermodynamic cycle. In Fig. 3, four relevant states involving pigment m are depicted: the state |m⟩ of the PPC, in which pigment m is in its first excited singlet state S1 and all other pigments n ≠ m are in their electronic ground state S0; the state |0⟩, in which all pigments are in their electronic ground state S0; as well as the electronic ground state |S0⟩ and first excited state |S1⟩ of the isolated pigment in an aqueous environment. The site energy is defined as a vertical transition energy, i.e., with the nuclei fixed at the equilibrium positions of the electronic ground state. In the framework of an electrostatic solvation model, this means that the excited state is equilibrated only with respect to the fast polarization component of the environment, representing the instantaneous response of the solvent or protein electrons to the change of the pigment's electronic state. The slow polarization component of the environment needs more time to relax, so that the excited state is initially out of equilibrium. Thus, the site energy $E_m$ of pigment m has two contributions,

$E_m = E^0_m + E_\lambda(m), \qquad (30)$

where $E^0_m$ is the energy difference between the excited and ground states when both are fully equilibrated, and $E_\lambda(m)$ is the reorganization energy. In the following, the calculation of $E^0_m$ is described; the treatment of $E_\lambda(m)$ will be discussed in Section 3.1.4.4. The thermodynamic cycle in Fig. 3 indicates that

$E^0_m = E_0 + \Delta G_{sp}(S_1) - \Delta G_{sp}(S_0), \qquad (31)$

where $\Delta G_{sp}(S_1)$ and $\Delta G_{sp}(S_0)$ are the Gibbs free energies of transfer from the aqueous environment to the protein site for the pigment in the first excited and the ground state, respectively, which are computed on the basis of the LPBE (see Section 3.1.4.2). $E_0$ is a reference value that corresponds to the transition energy of the pigment in the solvent environment. At present, $E_0$ is an adjustable parameter that is determined from a comparison of simulated and measured optical spectra of the PPC. Note that $E_0$ is different for chemically distinct pigments (e.g., chlorophylls a and b in LHCII). However, the site energies of chemically identical pigments are calculated with respect to the same reference value $E_0$, so that the actually computed quantities are the site energy differences between pigments of the same type. In the application of eqn (31), conformational variations between pigments in different sites are neglected. The inclusion of these variations requires a quantum chemical treatment that is discussed in Section 3.1.5. Direct experimental information about $E_0$ is not available, because the pigments are not water-soluble. A possible solution to this problem is the extension of the thermodynamic cycle in Fig. 3 to include the transfer of the pigment from a non-aqueous solvent to water. Then, one has to critically check the limits of a continuum description of the solvent.
3.1.4.2. Calculation of site energies based on the LPBE. The electrostatic model of the PPC is the same as described in Section 3.1.2.3. In addition, we need for each pigment the two sets of atomic partial charges q^(m)_I(0,0) and q^(m)_I(1,1) introduced in Section 3.1.1, describing the permanent charge distribution of the chromophore in the ground and the excited state, respectively. We note that these charges are the same for chemically identical pigments as long as conformational variations are neglected. Nonetheless, the index m is justified, as the position to which a charge is assigned differs between sites. The determination of these charges is discussed in Section 3.1.5. As in the case of the CDC method, the protonation-state dependent site energy E^0_m(s) can be written as in eqn (32), where E^0_m(0) is the site energy obtained for the reference protonation state, which in addition to the charge density coupling term (background term ΔΔG^(m)_back) includes a polarization term ΔΔG^(m)_pol (eqn (33) and (34)). Here, φ^(0/1)_{p,m/s,m} are the four solutions of the LPBE (eqn (20) with Q_a replaced by q^(m)_I) corresponding to the four states depicted in Fig. 3, with (0) and (1) referring to the charge sets q^(m)_I(0,0) and q^(m)_I(1,1), respectively. The background charges q_i comprise all partial charges in the PPC that do not belong to pigment m (including those of all pigments n ≠ m in their electronic ground state) as well as those charges of pigment m that are the same in the ground and the excited state (e.g., the phytyl chains of chlorophylls). The latter subset is termed q_k in the second sum in eqn (34) (representing the pigment in an aqueous environment). In complete analogy to eqn (27) and (29), a Coulomb correction term W_mm accounts for non-reference protonation states in eqn (32).
3.1.4.3. Choice of dielectric constants. In contrast to the protonation states discussed in Section 3.1.2.4, the solvent dielectric constant ε_solv relevant to the calculation of site energies is the one found at the temperature of the experiments that are to be simulated. Thus, we use ε_solv = 80 for ambient temperatures. At cryogenic temperatures, one has to take into account that the protonation patterns are frozen below 210 K and that the static dielectric constant of a glass-forming medium changes drastically at the glass transition. Unfortunately, information about the dielectric constants of the glasses used in optical experiments with light-harvesting complexes under cryo-conditions is as scarce as information about proton activities. Yu 108 investigated glycerol-water mixtures. The data suggest that ε_solv = 5 is a reasonable approximation at temperatures below 200 K, but the dielectric constant in the range of 4 to 77 K, relevant for most spectroscopic data, is actually unknown. In applications of the PB/QC method, the effective dielectric constant of the protein ε̃_p is varied to optimize simulated optical spectra and is therefore essentially an adjustable parameter. This parameter not only accounts for dielectric screening and local-field effects in the protein interior at different temperatures, but also compensates for possible inadequacies of the quantum chemical charge sets of the pigments. Thus, ε̃_p does not solely represent the static polarizability of the protein. As a consequence, its value used in site energy calculations may differ from the ε_p used for protonation patterns and may even be smaller than ε_opt.
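The structure implied by the four LPBE solutions can be sketched as follows (a schematic form only; the signs and the exact bookkeeping of the aqueous reference are fixed by eqn (33) and (34)):

\Delta\Delta G^{(m)}_{\mathrm{back}} = \sum_i q_i \left[\phi^{(1)}_{p,m}(\mathbf{r}_i) - \phi^{(0)}_{p,m}(\mathbf{r}_i)\right],

\Delta\Delta G^{(m)}_{\mathrm{pol}} = \frac{1}{2}\sum_I \left\{ q^{(m)}_I(1,1)\left[\phi^{(1)}_{p,m} - \phi^{(1)}_{s,m}\right](\mathbf{R}_I) - q^{(m)}_I(0,0)\left[\phi^{(0)}_{p,m} - \phi^{(0)}_{s,m}\right](\mathbf{R}_I) \right\},

i.e., the background term is the interaction of the change in pigment charge density with the background charges, screened via the LPBE potentials, and the polarization term is the change in Born-type solvation energy between the protein site and the aqueous environment.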
3.1.4.4. Non-equilibrium corrections. Finally, we turn to the non-equilibrium correction E_λ(m) of the site energy in eqn (30). As discussed above, optical excitation occurs between the equilibrium ground state and an excited state of the PPC in which only the electronic polarization is in equilibrium, while the nuclear polarization is not. Hence, we may write the transition energy as E_m = G̃_m − G_0, where G̃_m is the non-equilibrium free energy of the excited state and G_0 is the equilibrium free energy of the ground state. According to Marcus, 109 the non-equilibrium free energy of an excited state may be obtained from the equilibrium free energy G_m of the excited state and the equilibrium free energies G^(1−0,opt)_m and G^(1−0)_m of two fictitious systems, which carry the charge density difference between the excited and ground states of state |m⟩ of the PPC and are embedded in a dielectric with optical dielectric constant ε_opt or static dielectric constant ε, respectively. With eqn (30) and E^0_m = G_m − G_0 considered in the thermodynamic cycle above, we obtain the reorganization energy of eqn (40). Since the terms on the r.h.s. only contain charge differences, which are zero for all atoms of the PPC except for those on pigment m that take part in the excitation, i.e., Δq^(m), there are no background charges, and only polarization terms contribute, evaluated for f(r) = ε_opt(r) and f(r) = ε(r).
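Marcus's construction can be summarized compactly (a sketch consistent with the statements above):

\tilde{G}_m = G_m + G^{(1-0,\mathrm{opt})}_m - G^{(1-0)}_m,

so that, with E^0_m = G_m - G_0 and eqn (30),

E_\lambda(m) = \tilde{G}_m - G_0 - E^0_m = G^{(1-0,\mathrm{opt})}_m - G^{(1-0)}_m,

i.e., the reorganization energy is the difference in solvation free energy of the difference charge density \Delta q^{(m)} embedded in a purely electronic (\varepsilon_{\mathrm{opt}}) versus a fully relaxed (\varepsilon) dielectric.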
It turned out that the site-dependence of E_λ(m) is weak and can be neglected in applications of the PB/QC method. 45,101 This result does not mean that E_λ(m) is really site-independent, but rather that the electrostatic continuum model of the PPC is not able to reveal such a dependence. The recently introduced calculation of the exciton-vibrational coupling on the basis of a NMA offers a new approach to the reorganization energy, discussed further below (eqn (55) and (59)). Results obtained so far for the FMO protein suggest that, indeed, the contribution of E_λ(m) to site energy differences is small. 51
3.1.5. Atomic partial charges of the pigments and quantum chemical correction. In the CDC and PB/QC methods, relative site energy shifts are calculated by electrostatics, rather than computing the site energies E_m directly by quantum chemistry (QC). Nonetheless, the electrostatic methods use atomic partial charges q^(m)_I(1,1) and q^(m)_I(0,0) of the pigments obtained from QC, based on methods described in detail earlier. 75 If conformational variations between pigments in different sites of the PPC are neglected, these calculations are based on pigment structures optimized in a vacuum. Here, one encounters the problem that different QC methods (e.g., different exchange-correlation functionals in time-dependent density functional theory) may produce different sets of atomic partial charges and, hence, different site energy shifts, and it is not clear a priori which method to use. Decisions can be made based on a comparison of simulated and experimental optical spectra of the PPC, but in this way of evaluating the charge sets, the result depends on details of modeling the optical lineshape, so that different approaches may suggest different charge sets to be optimal (see, e.g., LHCII 110). Ideally, the QC method should reproduce several properties of the pigment molecule with high accuracy, so that the electronic wavefunctions and atomic partial charges are trustworthy without such an a posteriori evaluation, but this ideal is far from being reached for molecules as large as photosynthetic pigments. Nonetheless, present research activities aim at an implementation of recent developments in QC 3,4,111 to approach this ideal more closely.
If conformational variations between chemically identical pigments in different sites become relevant, one encounters two additional problems: (i) the QC calculations require a careful structure re-optimization of each individual pigment, while keeping the characteristics of the conformational change by introducing constraints. 45,46,112 Sub-optimal structures can cause large errors. (ii) The site energy shift has an additional contribution from the changed electronic wavefunction of the pigment, the quantum chemical correction. The problem of the limited accuracy of the QC method then shows up in a new guise, as there is now not only a QC influence on the atomic partial charges, but also a direct QC contribution to the transition energy shift. 45 As with the unconstrained pigments, there is research activity to investigate this contribution by exploiting recent innovations in QC methodology. 3,4,111
3.1.6. Protonation state dependent site energies and calculation of energy transfer and optical spectra. There are different possibilities to proceed after the protonation-state dependent site energies E_m(s) have been determined either with CDC (eqn (28)) or with PB/QC (eqn (30) and (32)): (a) We can take the site energies E_m(s) and excitonic couplings V_mn (determined as described in the next section), put them into the exciton Hamiltonian H_ex in eqn (2), resulting in H_ex(s), and on this basis calculate optical spectra, energy transfer rates etc. for each protonation pattern separately, determining the proper thermal average over these spectra or rates by taking into account the Gibbs free energies G_s (eqn (17)). This procedure is cumbersome and has not yet been applied, but work in this direction is in progress. (b) As an approximation, we can thermally average the site energies over protonation patterns and use the averaged site energies as input to H_ex. This procedure was applied in our earlier work on the FMO protein, 45 but is actually an oversimplification. (c) We can single out the most probable protonation pattern or a small set of patterns. In particular at cryogenic temperatures, where the optical spectra with the highest resolution are measured, most of the protonation probabilities are close to either zero or one. Specifically, we set x^(s)_m = 1 in eqn (28) and (32) if ⟨x_m⟩ ≥ 0.8, and x^(s)_m = 0 if ⟨x_m⟩ ≤ 0.2. For groups with 0.2 ≤ ⟨x_m⟩ ≤ 0.8, the two cases x^(s)_m = 0 and x^(s)_m = 1 are checked separately for their influence on the site energies. For applications of this procedure, see ref. 48, 101 and 107. If these influences are significant, spectra etc. can be calculated for a smaller number of protonation patterns and averaged by taking into account the free energies G_s of these patterns.
3.2 Excitonic couplings
As noted before, the excitonic coupling in general contains a short-range contribution due to electron exchange and a Coulombic part, for which the electrons stay at their respective molecules 73 (for recent reviews see ref. 11 and 113). The Coulombic part V_mn, which dominates the large majority of excitonic couplings between photosynthetic pigments, is obtained from the matrix element in eqn (8). The integration is over the spatial coordinates of the electrons r_1, ..., r_{N_m} of molecule A_m and r̄_1, ..., r̄_{N_n} of molecule B_n. The integration over the respective spin variables is also included, but not explicitly denoted, and real wavefunctions are assumed for simplicity. The intermolecular Coulomb coupling V^(mn)_AB in eqn (8) contains the intermolecular coupling between electrons, between electrons and nuclei, and between nuclei. Of these three contributions, only the first is shown in eqn (42), since it is the only one that gives a non-zero contribution, because of the orthogonality of the electronic wavefunctions, i.e., ⟨A^(m)_1|A^(m)_0⟩ = ⟨B^(n)_0|B^(n)_1⟩ = 0. By using Pauli's principle for the exchange of electrons and changing the names of integration variables, the matrix element can be rewritten 75 such that the integrations over the coordinates r_2, ..., r_{N_m} and r̄_2, ..., r̄_{N_n} can be performed by introducing one-particle transition densities ρ^(m)_10(r_1) of molecule A_m and, similarly, ρ^(n)_01(r̄_1) of molecule B_n. The matrix element in eqn (43) then follows from the Coulomb coupling of the transition densities of the two pigments (eqn (45)). The resulting 6-dimensional integral may be evaluated numerically, as in the transition density cube (TDC) method. 114 A numerically much less involved method of the same accuracy is given by the transition charge from electrostatic potential (TrEsp) method, 75 which is discussed in the next subsection.
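In compact form (Gaussian units), consistent with the description above, the Coulomb coupling of the transition densities reads

V_{mn} = \int d\mathbf{r}_1 \int d\mathbf{r}_2 \, \frac{\rho^{(m)}_{10}(\mathbf{r}_1)\,\rho^{(n)}_{01}(\mathbf{r}_2)}{|\mathbf{r}_1 - \mathbf{r}_2|}. \qquad (45)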
For large intermolecular distances, a multipole expansion may be used to evaluate the integral in eqn (45). By using the orthogonality of the molecular wavefunctions, which gives rise to the relation ∫ dr ρ^(m/n)_10(r) = 0, it is seen that the first non-vanishing contribution to V_mn is due to the coupling between transition dipole moments, the point-dipole approximation (eqn (46)), where the transition dipole moment d^(10)_m is the first moment of the transition density ρ^(m)_10(r) (eqn (47)). Examples for the validity of the point-dipole approximation are discussed below. Finally, we note that the transition density of chlorin-type pigments has a dipolar shape that can be qualitatively well approximated by two point charges of opposite sign placed at a distance of about 9 Å. 75,77 Therefore, a considerable improvement over the point-dipole approximation is often obtained by simply replacing the point dipole by such an extended dipole.
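With R_{mn} the center-to-center distance and \hat{\mathbf{e}}_{mn} the corresponding unit vector, the point-dipole approximation takes the familiar form (Gaussian units, screening factors omitted; a sketch consistent with eqn (46)):

V_{mn} \approx \frac{\mathbf{d}^{(10)}_m \cdot \mathbf{d}^{(10)}_n - 3\,(\mathbf{d}^{(10)}_m \cdot \hat{\mathbf{e}}_{mn})(\hat{\mathbf{e}}_{mn} \cdot \mathbf{d}^{(10)}_n)}{R_{mn}^3}.

The extended-dipole variant replaces each transition dipole by a pair of point charges \pm q separated by about 9 Å along the dipole direction, with q chosen such that the product of charge and separation reproduces the dipole strength.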
3.2.1. TrEsp method. In the TrEsp method, 75 the Coulomb coupling V_mn between transition densities in eqn (45) is evaluated by using atomic partial charges that are determined such that they fit the electrostatic potential (ESP) of the transition densities ρ^(m)_10(r_1) and ρ^(n)_10(r_2). The numerical fit is performed on a three-dimensional grid surrounding the pigments. The transition charges q^(m)_I(0,1) of the 0 → 1 transitions of the pigments are placed at the atom positions R^(m)_I of pigment m and R^(n)_J of pigment n. As results obtained with different quantum chemical methods show, there are uncertainties about the absolute magnitude of the transition charges, whereas the relative magnitudes, which characterize the shape of the transition density, differ much less. These uncertainties can be removed by comparing the first moment of the transition charges, the transition dipole moment, with the experimental vacuum transition dipole moment of the pigment; the transition charges are rescaled by a constant factor to reach agreement. The experimental values for different pigment types were obtained by Knox and Spring 115 from their analysis of the pigments' oscillator strengths in different solvents.
A scaling factor f for the Coulomb coupling is included in eqn (48) in order to take into account, implicitly, screening and local-field effects of the dielectric environment.
3.2.2. Poisson-TrEsp method. In the Poisson-TrEsp method, the influence of the dielectric environment on the excitonic coupling is modelled explicitly, thereby removing the unknown scaling factor of the TrEsp approach discussed above. As quantum mechanical derivations suggest, 116,117 these effects can be considered in the following way: the transition charges of the pigments are placed in molecule-shaped cavities that are surrounded by a homogeneous dielectric with dielectric constant ε = n², which equals the square of the refractive index and represents the optical polarizability of the protein and solvent environments. A Poisson equation is solved for the potential φ_m(r) of the transition charges of pigment m, where the dielectric constant ε(r) = 1 if r points inside a pigment cavity, and ε(r) = n² otherwise. The excitonic coupling of pigment m with pigment n is then obtained as in eqn (50). The transition charges q^(m)_I(0,1) of the pigments are obtained and corrected as described above for the TrEsp method. The value of the optical dielectric constant of PPCs is approximately 2, as determined 119 from the change in oscillator strength between protein-bound and solvent-extracted pigments of photosystem I 120 and from microscopic simulations. 91 Comparison of Poisson-TrEsp couplings obtained with ε = 2 and ε = 1 (the latter corresponding to TrEsp with f = 1) showed that the screening and local-field corrections of the Coulomb coupling can be well approximated by a constant factor f, which varies between 0.6 (ref. 107) and 0.8 (ref. 118) for the different complexes investigated. 47,107,110,118 Detailed analysis shows that f depends on the mutual orientation of the pigments rather than on their distance. In exceptional cases, 117 the surrounding dielectric can even increase the excitonic coupling between two pigments compared to vacuum. 116,117 Comparison of the Poisson-TrEsp and TrEsp couplings with excitonic couplings obtained in the point-dipole and extended-dipole approximations shows that the point-dipole approximation is reasonable for the FMO protein (we note, however, that the effective dipole strength has to be determined with Poisson-TrEsp). In the case of the LHCII complex, the point-dipole approximation is reasonable as well, except for one pigment pair. 101 In the case of the CP43 core antenna of photosystem II, the coupling of one pigment pair connecting the stromal and the lumenal layers of pigments shows large deviations in the point-dipole approximation, whereas an extended-dipole approximation was found to be valid. 107 Finally, neither the point- nor the extended-dipole approximation is valid for the large number of closely spaced pigments in photosystem I. 47
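As an illustration of how little machinery the TrEsp sum of eqn (48) requires, the following minimal sketch evaluates V_mn = f Σ_IJ q_I^(m) q_J^(n)/|R_I^(m) − R_J^(n)|. The charge values and geometries below are made up for illustration only; they are not published TrEsp charge sets.

import numpy as np

# Unit conversion: charges in units of e, distances in Angstrom -> coupling in cm^-1.
# e^2/(4*pi*eps0) is about 14.40 eV*Angstrom and 1 eV is about 8065.5 cm^-1,
# so the combined factor is roughly 116140 cm^-1 * Angstrom.
EV_ANGSTROM_TO_CM1 = 14.3996 * 8065.54

def tresp_coupling(q_m, R_m, q_n, R_n, f=0.8):
    """TrEsp excitonic coupling in cm^-1.

    q_m, q_n: atomic transition charges (units of e) of pigments m and n
    R_m, R_n: atomic positions (Angstrom), arrays of shape (N_atoms, 3)
    f: empirical screening/local-field factor (0.6-0.8 in the text)
    """
    # Matrix of pairwise interatomic distances between the two pigments
    d = np.linalg.norm(R_m[:, None, :] - R_n[None, :, :], axis=-1)
    return f * EV_ANGSTROM_TO_CM1 * np.sum(np.outer(q_m, q_n) / d)

# Two fictitious three-atom "pigments" 15 Angstrom apart; the transition
# charges sum to zero, consistent with the vanishing monopole of eqn (45).
q = np.array([0.05, -0.02, -0.03])
R1 = np.array([[0.0, 0.0, 0.0], [2.0, 0.0, 0.0], [4.0, 0.0, 0.0]])
R2 = R1 + np.array([15.0, 0.0, 0.0])
print(tresp_coupling(q, R1, q, R2))  # coupling in cm^-1

The Poisson-TrEsp variant would replace the bare 1/d kernel by the potential obtained from a numerical Poisson solver with the cavity model described above.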
3.3 Spectral density
So far, we have determined site energies and excitonic couplings for the equilibrium positions of the nuclei in the electronic ground state of the PPC. In the following, we want to study how these quantities change if the nuclei are displaced, and determine the linear exciton-vibrational coupling constants g_ξ(m,n) introduced in eqn (4). For this purpose, we consider the coordinate dependence of the exciton matrix elements H_mn, whose equilibrium values H^(0)_mn were introduced in eqn (3). The H_mn are expanded in a Taylor series with respect to small displacements of the positions R_J of the atoms J = 1, ..., N_atom of the PPC from their equilibrium values R^(0)_J.
Including terms up to first order in the displacements gives (eqn (51)) H_mn({R_J}) ≈ H^(0)_mn + Σ_J (∇_J H_mn|_0) · (R_J − R^(0)_J), where H^(0)_mn and (∇_J H_mn|_0) are the values of H_mn and of its gradient with respect to the three Cartesian coordinates of atom J, respectively, taken at the equilibrium positions of the nuclei in the electronic ground state of the PPC.
The mass-weighted normal coordinates q_ξ(t) are related to the displacements by R_J(t) − R^(0)_J = Σ_ξ M_J^(−1/2) A^(ξ)_J q_ξ(t) (eqn (52)), 12 where M_J is the mass of atom J and A^(ξ)_J contains the contributions of this atom to the eigenvector of normal mode ξ. From eqn (51) and (52) we obtain a Hamiltonian that equals H_ex + H_ex−vib introduced in eqn (2) and (4), if the dimensionless coupling constant g_ξ(m,n) is introduced via (eqn (54) and (55)) ω_ξ^(3/2) (2ℏ)^(1/2) g_ξ(m,n) = Σ_J M_J^(−1/2) A^(ξ)_J · (∇_J H_mn|_0). In order to evaluate the r.h.s. of eqn (54), we need to know how the matrix elements H_mn({R_J}) depend on the nuclear coordinates R_J. These dependencies are revealed by the TrEsp and CDC methods introduced above, using eqn (14) and (48), where the equilibrium vectors R^(0)_J are taken from the crystal structure after modeling of hydrogen atoms and energy minimization. The A^(ξ)_J are obtained from the eigenvectors of the NMA, which also provides the vibrational frequencies ω_ξ. Since a NMA provides a microscopic model for the atomic polarization, ε_eff should be chosen smaller than the value used for static site energy calculations with the CDC method (eqn (14)). In the application to the FMO protein, we used ε_eff = 1/f = 1.25, where f = 0.8 was obtained from a comparison of Poisson-TrEsp and TrEsp excitonic couplings 118 and takes into account the effect of the electronic polarizability. The resulting spectral density J_mnkl(ω) is dominated by site energy fluctuations (J_mmmm(ω)) and correlations between site energy fluctuations (J_mmnn(ω), m ≠ n), whereas the parts containing fluctuations of excitonic couplings are at least one order of magnitude smaller. 51 The dominant diagonal parts of the spectral density, J_mmmm(ω), are close in shape and magnitude to the experimental J(ω) (Fig. 4). There are, however, some systematic deviations: at low frequencies the NMA spectral density lies above, and at higher frequencies below, the experimental values. One reason could be the neglect of anharmonicities, another the neglect of intramolecular modes of the pigments. Anharmonicities were included in a recent combination of molecular dynamics with CDC by Jing et al., 36 which, however, did not reach long enough timescales to resolve the important low-frequency region of the spectral density and the correlations in the site energy fluctuations. Intramolecular modes of the pigments can be included by performing a QC-based NMA for the pigments in their ground and excited states. 36,121
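In this framework, the spectral density follows from the NMA coupling constants as (a sketch consistent with the definitions above; the normalization is such that the frequency integral of a diagonal part gives the Huang-Rhys factor, cf. Fig. 4):

J_{mnkl}(\omega) = \sum_\xi g_\xi(m,n)\,g_\xi(k,l)\,\delta(\omega - \omega_\xi), \qquad S_m = \int_0^\infty d\omega \, J_{mmmm}(\omega).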
3.4 Critical approximations and comparison with other methods
In most of our applications of the methods described above to chlorophyll-binding PPCs, we neglected conformational variations between pigments in different sites. This appears to be a suitable approximation so far. 45,46,48,101,110 Nonetheless, additional contributions to site energy shifts can be expected, as it is well known that out-of-plane distortions of tetrapyrroles have an influence on their optical spectra. 122,123 This concerns the out-of-plane orientation of conjugated substituents such as acetyl or vinyl groups, as well as of the four pyrrole moieties and eventually the isocyclic ring E, leading to a distortion of the π-system (for a discussion of chlorophyll structures, see ref. 124). To take into account these conformational variations in a QC/E2 approach, one has to add a quantum chemical correction to the electrostatically calculated site energy shift, as discussed in Section 3.1.5, besides considering deformation effects on the charge distributions and transition densities. Such a procedure has two strict requirements: (i) the crystal structure of the PPC has to be of sufficiently high resolution to obtain reliable information about structural variations, and (ii) a QC method has to be available that allows for a sufficiently accurate determination of molecular properties as a function of nuclear coordinates. Concerning requirement (i), we think that the crystal structure should have a resolution of 2.0 Å or better. At lower resolutions, it is particularly difficult to pinpoint the orientation of conjugated substituents (for a recent example, see ref. 125). Concerning requirement (ii), there is ongoing research activity to improve electronic structure methods. 3,4,111 As an alternative to a rigorous ab initio calculation, one may apply a semi-empirical approach. First steps in this direction were taken by Zucchelli et al., 126,127 who estimated ring deformation effects on the chlorophyll site energies in LHCII. Their method is based on a normal mode decomposition method developed by Shelnutt and coworkers, 123,128 who found that the distortions of protein-bound porphyrins are dominated by displacements along the macrocycle normal modes with the lowest frequencies. By projecting the pigment conformations found in the crystal structures onto the eigenvectors of the low-frequency normal modes, the structural information for an evaluation of site energy changes due to macrocycle deformations becomes available. However, for a comparison with experimental data, besides the influence of the substituents discussed above, one also has to take into account the electrostatic pigment-protein interaction. Combining the normal mode decomposition method of Zucchelli et al. with the CDC or PB/QC methods might be a promising direction for future work.
A subtlety in combining classical force field calculations of molecular geometries with QC calculations of electronic transition energies lies in the geometry mismatch, which arises because the classical force field constants are not fully compatible with the QC. 36,51 This artifact is most likely responsible for the drastic deviations observed between spectral densities of the exciton-vibrational coupling calculated using a QM/MM approach (see the Introduction) and the experimental data (e.g., Fig. 5d of ref. 37 and Fig. 6 of ref. 33), in particular at high frequencies. Fortunately, for chlorin-type photosynthetic pigments the Franck-Condon factors of intramolecular vibronic transitions are very small, and the spectral density in the energy range relevant for exciton relaxation is dominated by intermolecular vibrational degrees of freedom. As discussed above, the latter have been studied with NMA, 51 revealing good qualitative agreement, but also some systematic deviations. These might be removed by including anharmonic molecular motion and intramolecular vibrational degrees of freedom, e.g., by MD simulations and QC-based NMA, respectively.
Another objective of future research concerns the inclusion of so-far missing terms of the perturbation theory. These include, e.g., dispersive and inductive intermolecular couplings that may involve the heterogeneous polarizability of the PPC.
Within polarizable continuum models (PCMs), 39-42 it is possible to include the homogeneous polarizability of the environment directly in QC calculations of electronic properties. However, the neglect of Pauli repulsion between the electron density of the QC part and the environment in this type of treatment may cause an overpolarization of the former, also known as the electron leakage problem. 3,53,54 The inclusion of the charge density of the environment via classical point charges in QC calculations 30,33,37 causes the same problem. To avoid these artifacts, a two-step procedure can be applied, as discussed above, however at the expense of neglecting all polarization effects on the wavefunctions of the pigments. A quantification of these effects is an important future goal.
A first treatment of the heterogeneous polarizability of the protein, in the framework of a polarizable force field model, was presented recently by Curutchet et al. 38 in calculations of excitonic couplings. They reported an enhancement of the resulting energy transfer rates by as much as a factor of 4, compared to calculations assuming a homogeneous dielectric environment 38 with an average dielectric constant.
Another promising route to include the mutual polarization of the building blocks of the PPC is the density-fragment interaction (DFI) approach proposed by Fujimoto and Yang. 29 This method has recently been applied to calculate excitonic couplings (referred to as the transition density fragment interaction (TDFI) method), either excluding 129,130 or including 131 interpigment electron exchange. What is still missing is the effect of the polarization of the protein environment; a combination of TDFI with Poisson-TrEsp might be one possible way to go. Related in spirit to DFI is the subsystem DFT approach. 3
In the methods discussed in the previous subsections there is still one adjustable parameter, namely the transition energy E_0.† So far, this quantity can only be evaluated indirectly via the calculation of optical spectra. As long as a single pigment type is considered, varying E_0 would just displace the resulting optical spectrum along the energy (wavelength) axis and would have no influence on its shape. In the case of different pigment types absorbing in close spectral regions, the determination of the related E_0 values becomes more ambiguous. In any case, improved QC calculations would be desirable for an independent evaluation of these parameters. We will not discuss the pitfalls and challenges of QC calculations here, but refer instead to some comprehensive reviews on this topic. 3,4,111
4 Quantum dynamics of excitons
In this section, we use the exciton Hamiltonian given in Section 2, which was motivated and parameterized by microscopic theory in the previous section, to study excitation energy transfer. Different possible scenarios are discussed, which result from different relative strengths of the pigment-pigment and pigment-protein couplings.
4.1 Weak excitonic coupling
In the case of weak excitonic coupling, it is reasonable to assume that the exciton-vibrational coupling is dominated by the fluctuation of site energies and that the fluctuation of excitonic couplings is negligible. Then, the exciton-vibrational coupling constants g_ξ(m,n) in H_ex−vib (eqn (4)) are dominated by the diagonal parts m = n. After optical excitation, the strong fluctuation of site energies on the one hand localizes the excited states, and on the other hand leads to a fast relaxation of the nuclei in response to the change in charge density between the ground and the excited state of the pigment. In order to describe the change in equilibrium positions of the nuclei occurring after excitation energy transfer, we introduce potential energy surfaces (PES) of localized excited states of the PPC by rewriting the PPC Hamiltonian (eqn (1)) as in eqn (56), where T_nucl denotes the kinetic energy of the nuclei and the PES of the excited state of the PPC localized at pigment m is given by eqn (57). It contains the energy difference E^0_m between the minimum of the PES of the excited state and that of the PES of the ground state. Due to the exciton-vibrational coupling, these PES are shifted with respect to each other by −2g_ξ(m,m) along the coordinate axes. E^0_m is given as E^0_m = E_m − E_λ(m), with the reorganization energy of the mth excited state reading E_λ(m) = Σ_ξ ℏω_ξ g_ξ(m,m)². E_λ(m) is the energy that is released after optical excitation by relaxation of the nuclei into their new equilibrium positions. It corresponds to the non-equilibrium correction obtained from continuum electrostatics in eqn (40). If E_λ(m) is much larger than the excitonic coupling, the system can lower its energy more by keeping the excited states localized than by delocalization of the excited states, discussed below. From another point of view, the vibrational environment introduces a large dephasing of electronic coherences that does not allow delocalization to occur. The Liouville-von Neumann equation for the statistical operator Ŵ_mn in the representation of localized states is the starting point. For a perturbative treatment of the coupling V_mn, we use the interaction representation Ŵ^(I)_mn = U†_m(t) Ŵ_mn U_n(t) (eqn (61)), where the time evolution operator U_k(t) of the vibrational degrees of freedom in the PES of the kth electronic state is given by eqn (62). The population P_m of the excited state of pigment m is obtained by performing a trace over all vibrational degrees of freedom, P_m(t) = tr_vib{Ŵ_mm} = tr_vib{Ŵ^(I)_mm} (eqn (65)). A second-order perturbation theory in the excitonic coupling V^(I)_mn then results in the generalized rate equation (eqn (66)) d/dt P_m(t) = −ℜ Σ_n ∫_0^t dτ [k_{m→n}(τ) P_m(t−τ) − k_{n→m}(τ) P_n(t−τ)], where ℜ denotes the real part. The generalized (time-dependent) rate constant reads (eqn (67)) k_{m→n}(τ) = (2|V_mn|²/ℏ²) tr_vib{U†_m(τ) U_n(τ) Ŵ^eq_mm}, containing the equilibrium statistical operator of the vibrational degrees of freedom of the PPC in the mth electronic state, Ŵ^eq_mm = exp(−U_vib(m)/k_B T)/tr_vib{exp(−U_vib(m)/k_B T)} (eqn (68)), which describes the vibrationally relaxed initial state. Since harmonic PES have been assumed, the trace over the vibrational degrees of freedom in eqn (67) can be carried out analytically (e.g.
by using a second-order cumulant expansion, which is exact for harmonic oscillators), giving the rate constant in closed form in terms of a time-dependent function G_mn(t). This function is related to the spectral densities J_mmmm(ω), J_nnnn(ω) and J_mmnn(ω), which characterize the fluctuations of the site energies of pigments m and n and the correlations between them, respectively; n(ω) is the Bose-Einstein distribution function of vibrational quanta. It is seen thereby that for perfectly correlated site energy fluctuations, i.e., for g_ξ(m,m) = g_ξ(n,n), no energy transfer would be possible.
4.1.1. Förster theory. The Förster rate constant follows from the above expressions if two simplifying assumptions are made: (i) the generalized rate constant k_{m→n}(τ) decays rapidly on the time scale of the changes of the populations P_m(t) and P_n(t). In this case, the P_k(t−τ) (k = m,n) in the integral in eqn (66) may be approximated by their values at time t and taken out of the integral, and the upper integration limit may be formally extended to infinity, giving a master equation with the rate constant k_{m→n} = ℜ ∫_0^∞ dτ k_{m→n}(τ). Using k_{m→n}(τ) = k*_{m→n}(−τ), this rate constant reads (eqn (74)) k_{m→n} = (|V_mn|²/ℏ²) ∫_{−∞}^{∞} dτ e^{iω_mn τ} e^{G_mn(τ) − G_mn(0)}. (ii) The second approximation of Förster theory is to assume that there is no correlation in the site energy fluctuations. In this case, J_mmnn(ω) = 0 for m ≠ n in eqn (72), and we may write the rate constant as an overlap integral of the lineshape functions for donor emission and acceptor absorbance (eqn (75)-(77)). We note that for correlated site energy fluctuations, such a factorization of the integrand in eqn (75) into donor and acceptor properties is not possible. 132
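Schematically (with prefactors depending on the Fourier convention of the lineshape functions), the two ingredients just described combine as

G_{mn}(t) = \int_0^\infty d\omega \, \left[J_{mmmm}(\omega) + J_{nnnn}(\omega) - 2J_{mmnn}(\omega)\right] \left\{ [1 + n(\omega)]\,e^{-i\omega t} + n(\omega)\,e^{i\omega t} \right\},

k_{m\to n} = \frac{2\pi |V_{mn}|^2}{\hbar^2} \int d\omega \, D_m(\omega)\,A_n(\omega),

where D_m and A_n are area-normalized donor-emission and acceptor-absorbance lineshapes. Perfectly correlated fluctuations make G_mn vanish and thereby remove the line broadening needed for the spectral overlap, consistent with the statement above that no energy transfer would then be possible.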
4.2 Strong excitonic coupling – Redfield theory
In the case of strong excitonic coupling, delocalized states |M⟩ = Σ_m c^(M)_m |m⟩ are excited, which are obtained by diagonalizing the exciton Hamiltonian H_ex in eqn (2), giving eigenenergies ℏω_M and eigenvectors containing the coefficients c^(M)_m. The exciton-vibrational coupling in the representation of delocalized states is treated in the following as a small perturbation to derive a rate constant k_{M→N} for exciton relaxation between two delocalized states |M⟩ and |N⟩. The interaction representation of the statistical operator is used, with the time evolution operators of the excitonic and vibrational degrees of freedom, respectively, using the H_vib in eqn (6).
The Liouville-von Neumann equation for this statistical operator, expanded with respect to the delocalized states, contains the exciton-vibrational coupling in the interaction representation, V̂^(I)_KL = e^{iω_KL t} U†_vib(t) V̂_KL U_vib(t), where ω_KL = ω_K − ω_L and V̂_KL is the exciton-vibrational coupling of eqn (80). A quantum master equation can be derived along the same lines as for the weak-coupling case studied above, but this time using second-order perturbation theory in V̂^(I)_KL. The resulting Redfield-type rate constant reads (eqn (85)) k_{M→N} = (2/ℏ²) ℜ ∫_0^∞ dτ e^{iω_MN τ} tr_vib{U†_vib(τ) V̂_MN U_vib(τ) V̂_NM Ŵ^eq_vib}, where Ŵ^eq_vib describes the thermally equilibrated vibrational degrees of freedom of the electronic ground state of the complex. The shift in the equilibrium positions of the nuclei that may occur upon changing the electronic state of the PPC is neglected; later, this shift will be included in modified Redfield theory. For the harmonic oscillator Hamiltonian (eqn (6)), the trace over the vibrational degrees of freedom in eqn (85) can be performed, and the rate constant can be related to the spectral density J_mnkl(ω), which enters the rate constant at the transition frequency ω = ±ω_MN between the two exciton levels. This spectral density contains different contributions resulting from fluctuations of site energies (J_mmmm(ω)), fluctuations of excitonic couplings (J_mnmn(ω)) and correlations among and between them.
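For purely diagonal, uncorrelated site energy fluctuations, the trace reduces to a golden-rule-like form; schematically (up to factors fixed by the definition of J(ω), with ω_MN = ω_M − ω_N):

k_{M\to N} \propto \gamma_{MN} \left\{ [1 + n(\omega_{MN})]\,J(\omega_{MN})\,\Theta(\omega_{MN}) + n(-\omega_{MN})\,J(-\omega_{MN})\,\Theta(-\omega_{MN}) \right\}, \qquad \gamma_{MN} = \sum_m \left(c^{(M)}_m\right)^2 \left(c^{(N)}_m\right)^2,

so that downhill relaxation (ω_MN > 0) proceeds by emission of a vibrational quantum at the exciton energy gap, while uphill transfer is weighted by the thermal occupation n.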
4.3 Intermediate excitonic coupling
In the case of comparable strengths of the excitonic and exciton-vibrational couplings, a non-perturbative approach is in general needed. Recent normal mode calculations of the microscopic coupling constants g_ξ(m,n) of H_ex−vib (eqn (4) and (55)) and a subsequent transformation of these coupling constants to the basis of delocalized states (eqn (81)) show that the diagonal elements g_ξ(M,M) are larger than the off-diagonal elements g_ξ(M,N) (M ≠ N) (Fig. 5). Modified Redfield theory 13-15 and time-local non-Markovian density matrix theory 59 make use of this inequality by providing an exact description of the diagonal parts and using perturbation theory only for the off-diagonal parts.
4.3.1. Modified Redfield theory. If the diagonal parts g_ξ(M,M) of the exciton-vibrational coupling dominate, it is appropriate to assume fast nuclear relaxation in the PES of the exciton states, which are constructed by rewriting the Hamiltonian in the exciton representation (eqn (6), (79) and (80)) such that the PES of exciton state |M⟩ has its minimum at position −2g_ξ(M,M), and the energy contains the reorganization energy of exciton state |M⟩. The V̄_MN in eqn (88) comprise the off-diagonal elements of the exciton-vibrational coupling (eqn (81)).
Along the same lines as in the derivation of the rate constants for weak and strong excitonic coupling, a second-order perturbation theory in V̄_MN yields the following expression for the rate constant (eqn (92)): k_{M→N} = (1/ℏ²) ∫_{−∞}^{∞} dt tr_vib{U†_M(t) V̄_MN U_N(t) V̄_NM Ŵ^eq_MM}, where U_K(t) describes the time evolution of the vibrational degrees of freedom in the PES of exciton state |K⟩, and Ŵ^eq_MM is the equilibrium statistical operator of the nuclei in exciton state |M⟩.
Comparison of eqn (92) with the Redfield rate constant (eqn (85)) shows that the mutual shifts of the PES, neglected before, are now taken into account. A comparison with the Förster-type rate constant (eqn (67)) shows that the coupling between the PES now depends linearly on the vibrational coordinates Q_ξ, whereas it is coordinate-independent in eqn (67). Nevertheless, for the harmonic PES considered here, the trace over the vibrational degrees of freedom can be performed analytically. 15 In Fig. 6, the relaxation of excitons in the monomeric subunit of the FMO protein, calculated assuming δ-pulse excitation at t = 0, is shown. The site energies were obtained with the CDC method. 48 The excitonic couplings were calculated using the point-dipole approximation, 48 verified earlier by Poisson-TrEsp, 118 which also provided the effective transition dipole strength. The original spectral density J_mnkl(ω) was obtained directly from a combination of CDC and TrEsp with NMA. 51 The corrected spectral density reads J^c_mnkl(ω) = f(ω) J_mnkl(ω) and contains a frequency-dependent factor f(ω) = J_exp(ω)/J̄(ω), with the average diagonal part of the NMA spectral density, J̄(ω) = (1/7) Σ_{m=1}^{7} J_mmmm(ω), and the experimental spectral density J_exp(ω) of the FMO protein (Fig. 4). f(ω) is introduced to correct for the limitations of the harmonic approximation. The larger amplitude of the high-frequency part of the corrected spectral density allows the protein to dissipate the excess energy of the excitons faster (Fig. 6 and eqn (87)). The relaxation times obtained with the corrected spectral density are in agreement with pump-probe experiments. 133 For both spectral densities, the relaxation obtained with modified Redfield theory is somewhat faster than the Redfield relaxation. This effect is due to the inclusion of multi-quantum vibrational transitions in modified Redfield theory. 14,134 In this way, the protein can bridge the energy gaps between different exciton states by multiple vibrational quanta, and not only by single quanta as in Redfield theory. Finally, we note that although the correlation in site energy fluctuations in the spectral density has a large amplitude, its influence on exciton relaxation was found to be negligible.
51 The inhomogeneous charge distribution of the protein was found to be responsible for this effect.
4.3.2. Non-perturbative approaches and explicit treatment of dynamic localization. A shortcoming of the perturbation theory in the off-diagonal parts of the exciton-vibrational coupling, used in the above modified Redfield theory, is the neglect of dynamic localization of the exciton wavefunction. For example, if two pigments are located at a large enough distance that their excitonic coupling is much smaller than the local reorganization energy of the exciton-vibrational coupling, but they happen to have the same site energy, their wavefunction will be delocalized for all times, since the exciton coefficients resulting from the diagonalization of H_ex are time-independent. In reality, however, the slightest fluctuation of the site energies would localize the wavefunction. Since the only approximation used above in modified Redfield theory concerns the off-diagonal part of the exciton-vibrational coupling, we have to conclude that a description of dynamic localization requires a higher-order perturbation theory or an exact treatment of the latter. Recently, such non-perturbative approaches became available through the development of the hierarchical equation of motion (HEOM) approach, 21-24 the density matrix renormalization/polynomial transformation approach, 25,26 and path integral techniques. 27,28 Although numerically very expensive, these approaches will undoubtedly provide a deeper understanding of the exciton-vibrational motion in PPCs. The explicit treatment of dynamic localization will allow the description of excitation energy transfer in networks with intermediate and weak couplings more realistically than with generalized Förster theory. A second important application is the inclusion of high-frequency pigment vibrations in the description of exciton dynamics. Since the Franck-Condon factors of these high-frequency modes are very small, the effective excitonic couplings involving the excited vibronic transitions are also small, and the mixing with vibronic transitions of neighboring pigments can easily be affected by dynamic localization due to the coupling with protein vibrations. Finally, a third interesting application is the description of the optical properties of pigments at very close distances, where electron exchange becomes possible. In this case, the mixing of exciton states with charge transfer states 70,73 leads to very strong exciton-vibrational coupling that can cause dynamic localization effects. 135
Fig. 6 Relaxation of excitons in the monomeric subunit of the FMO protein at T = 77 K after excitation by a δ-pulse at t = 0, calculated with modified Redfield theory (solid lines) and Redfield theory (dashed lines), using either the spectral density J_mnkl(ω) obtained directly from the NMA or a corrected spectral density.
5 Applications
5.1 The FMO protein
Although the structure of this protein has been known since 1975, 136 it took until the end of the 1990s to obtain a realistic set of site energies and excitonic couplings. The key new assumption of Aartsma and coworkers 137 was a much smaller effective dipole strength of the pigments for the calculation of the excitonic couplings. In 2005, the first 2D spectra of the FMO protein were reported 60 and interpreted using the Hamiltonian of Aartsma and coworkers, with a modification of one excitonic coupling. This modification was later shown to be incorrect. 118 A calculation of excitonic couplings without adjustable parameters became possible with the Poisson-TrEsp method 118 and with the determination of the vacuum dipole strength of bacteriochlorophyll a (and related pigments) by Knox and Spring. 115 The verification, in 2006, of Aartsma's effective dipole strength, of the validity of the point-dipole approximation used for the calculation of the excitonic couplings, and of the inferred site energies, using also an improved lineshape theory, led to the prediction of the relative orientation of the FMO protein with respect to the reaction center complex. 118 This prediction was confirmed experimentally using chemical labeling and mass spectrometry by Blankenship and coworkers in 2009. 138 In 2007, a structure-based simulation using PB/QC 45 supported the fitted site energies. It was found that the electric fields of two α-helix backbones contribute significantly to the creation of an energy sink at a particular pigment in the FMO protein. Also in 2007, 2D spectra showed long-lived quantum beats 61 and triggered a fundamental discussion about the role of coherences, and their possible protection by the protein (e.g., by correlated site energy fluctuations), for the efficiency of light-harvesting. QM/MM approaches did not find any correlation in site energy fluctuations, but it was argued 36 that longer MD propagation times might be needed to resolve them. Indeed, a recent NMA/CDC/TrEsp analysis found strong correlations in site energy fluctuations at low frequencies. However, these correlations were shown to have practically no influence on the decay of coherences between different exciton states and on exciton relaxation. 51 The inhomogeneous charge distribution of the FMO protein was found to be responsible for this effect. In this way, it became clear that the same mechanism which creates an excitation energy funnel in this system also leads to a fast dissipation of the excitons' excess energy. In 2009, an eighth pigment was (re)discovered that is bound at the periphery of each monomeric subunit of the FMO protein. 139 Its location led Blankenship and coworkers to suggest that this pigment is the linker to the baseplate connecting the FMO protein with the outer antenna system. Indeed, CDC calculations showed that this pigment is the most blue-shifted pigment in FMO and thereby completes the excitation energy funnel created by the pigment-protein coupling in this system.
48 Located at the periphery of the FMO protein, the eighth pigment is found to interact with charged amino acids; the large blue shift results from three deprotonated Asp and one deprotonated Glu. These residues are situated in the region of negative difference potential of this pigment, thereby causing the blue shift. Since in vivo the surface of the FMO protein interacts with another protein rather than with water, the influence of the lower dielectric polarization of the former on the site energy of the eighth pigment has been investigated as well. It was found that just one titratable group in the environment of the eighth pigment changes its protonation state and that this pigment remains the most blue-shifted pigment of all. 48
5.2 The light-harvesting complex LHCII of higher plants
LHCII is the major light-harvesting complex in the thylakoid membrane of higher plants and can be considered the most abundant membrane protein on earth. In the native system, it forms supercomplexes with minor homologous antenna proteins and the photosystem II core complex. 140,141 The latter is the site of photosynthetic water oxidation. 1,2 The task of the antenna system surrounding the core complex is not only to deliver excitation energy, but also to regulate the energy flow. However, the precise role of LHCII in this regulation is still elusive. Elucidation of the 3D structure of isolated LHCII was a major breakthrough in the field 142 and showed that LHCII is a trimeric complex, in which each monomeric subunit binds eight chlorophyll (Chl) a and six Chl b pigments, as well as four carotenoids of three different types and a structurally important lipid molecule. Based on this structural information, Novoderezhkin et al. 143 developed an exciton Hamiltonian for the Chl pigments in LHCII, using excitonic couplings calculated in the point-dipole approximation and fitted site energies; this Hamiltonian is an important benchmark and allows for a simulation of stationary and time-resolved optical spectra (see also ref. 7 and 144). Application of the Poisson-TrEsp method for the excitonic couplings and the PB/QC method for the site energies largely confirmed this Hamiltonian, except for assigning lower site energies to one Chl a and one Chl b, higher site energies to two Chl b, and a weaker excitonic coupling to one Chl b pair, and thus permitted us to link the exciton Hamiltonian to the molecular structure on an electrostatic basis. 101 To achieve good agreement between simulated and measured linear optical spectra, it was necessary to include high-frequency vibrational pigment modes explicitly in the Hamiltonian. 110,145 The Chl system of LHCII is divided into domains by virtue of a cut-off coupling V_c, to simulate the dynamic localization of exciton states in an implicit way. Thus, for excitonic couplings of pigments m and n belonging to different domains, it holds that V_mn < V_c, and exciton delocalization is allowed only within domains. Within this model, it became possible to understand the slow energy transfer times measured in the Chl a spectral region. 145 A consequence of this treatment is that the excitonic couplings involving vibronic states are also smaller than the cut-off coupling, so that these states remain localized on the respective pigments.
We note that in the work of Novoderezhkin et al., 7,143,144 the high-frequency modes are treated in a different way, as they are included in the spectral density. A goal of current research is to better understand the implications of these different treatments of the modes for optical lineshapes and excitation energy transfer.
Concerning energy flow in LHCII, the main result of the structure-based simulations is that the energy sink (i.e., the terminal emitter domain) is located at Chl a610, at the periphery of the LHCII trimer and at the stromal side of the thylakoid membrane, probably involving also Chls a611 and a612 at physiological temperatures. This assignment is in agreement with earlier proposals 146,147 based on the Novoderezhkin Hamiltonian and mutagenesis studies. 148,149 The terminal emitter domain is likely one of the sites in the photosystem II antenna system where excitation energy flow is regulated. However, the simulations also revealed problems that remain to be solved. Three of these are: (i) possible temperature-dependent structural changes of LHCII that affect exciton states in a yet unknown way, 148 (ii) detergent-induced structural changes that cause pigment orientations in solubilized LHCII trimers to differ from those in the crystal structure, 110 and (iii) a mismatch between simulated and measured circular dichroism spectra in the Chl b region around 650 nm. 101,110,144
5.3 Cyanobacterial photosystem I and photosystem II
The trimeric cyanobacterial photosystem I core complex contains 96 Chl a pigments per monomer. 150 This large size represents a particular challenge for theory. Attempts have been made to use the 96 site energies as parameters to be determined from a fit of optical spectra. 151,152 However, an unambiguous determination of 96 site energies based on a fit of a few linear optical spectra is rather unlikely. On the other hand, these fits describe the spectra quantitatively, whereas early structure-based calculations 30,44 could only describe the absorbance spectrum, but failed, e.g., for the linear dichroism. A first structure-based explanation of the linear absorbance and the linear and circular dichroism spectra was obtained by using Poisson-TrEsp for the excitonic couplings and the CDC method, in combination with an evaluation of the protonation pattern of the titratable residues of the protein, for the site energies. 47 A detailed evaluation of the pigment-protein coupling revealed the importance of long-range electrostatic interactions: most of the site energies are determined by multiple interactions with a large number (>20) of amino acid residues. Of the 78 titratable residues of the protein, 23 were calculated to be in a non-standard protonation state at pH 6.5 and 300 K (where the standard is defined as the protonation state observed at pH 7.0 in an aqueous environment). Nevertheless, the site energies obtained for the standard protonation state are within 100 cm⁻¹ of those obtained for the non-standard pattern, except for Chl 51, whose site energy is blue-shifted by 500 cm⁻¹ in the non-standard protonation pattern. Interestingly, the calculations reveal a higher concentration of low-energy exciton states on the side of the A-branch of the reaction center. This finding could provide an explanation for the more frequent use of this branch in electron transfer reactions. Another finding concerns the presence of an excitation energy barrier formed by pigments located between the reaction center pigments and the low-energy pigments of the antenna; the latter are found at an average distance of about 25 Å from the special pair of the reaction center. Since electron exchange was not considered in this work, the identity of a few long-wavelength absorbing Chls was not obtained. A calculation scheme has been developed 73 that, in combination with spectroscopic data, 71 seems to be a promising tool to provide these identities in future work.
Concerning photosystem II (PSII), various functional states of PSII reaction centers were identified on the basis of calculations of optical spectra, and the overall decay of excited states by excitation energy transfer and trapping by the reaction center was found to be transfer-to-the-trap limited. 155,156 The site energies used in these calculations were obtained from fits of optical spectra of the CP43, CP47 and D1-D2-cyt b559 subunits of PSII, as well as from optical difference spectra of PSII core complexes. An important future goal is the direct evaluation of the site energies by structure-based calculations. Preliminary results, 107 largely supporting the earlier fits, 20 have been obtained for the CP43 subunit based on the 2.9 Å resolution structure. 157 Work is in progress to exploit the most recent structural refinement at 1.9 Å resolution. 158
6 Summary and outlook
In the present perspective, we have put together the ingredients necessary to bridge the gap between the structural data of photosynthetic PPCs and their photophysical function. It is a great advantage that the optical properties of these systems and their building blocks are so well known, and a challenge for theory. A major simplification arises from the fact that different pigments do not exchange electrons but only excitons. Therefore, a rather simple-looking PPC Hamiltonian can be used to describe the dissipative exciton motion in these systems. Nevertheless, the motion of excitation energy resulting from such a Hamiltonian can have many different characteristics, arising from the relative strength of the pigment-pigment (excitonic) and pigment-protein (exciton-vibrational) couplings. Microscopic theories provide the means to parametrize the PPC Hamiltonian, which can then be used to calculate the quantum dissipative motion of excitons. The predictive power of these methods and calculation schemes is steadily increasing and already allows us to draw conclusions about the building principles of photosynthetic antennae that might also be useful for the creation of artificial light-harvesting devices. For example, in the FMO protein the inhomogeneous charge distribution of the protein is responsible both for creating an excitation energy funnel that guides the excitons towards the reaction center and for a fast dissipation of the excitons' excess energy. It is an important future goal of structure-based theory to reach a similar understanding of the photophysical properties of larger PPCs, including their interplay in the photosynthetic membrane and their ability to switch from a light-harvesting to a photoprotection mode.
Thomas Renger studied physics at the Humboldt University of Berlin, where he obtained his PhD degree under the supervision of Volkhard May in 1998. Afterwards, he became a Feodor Lynen fellow, working with Rudy Marcus at Caltech (Pasadena), and in 2001 an Emmy Noether fellow at the Free University of Berlin. Since 2009 he has been a professor of theoretical physics at the Johannes Kepler University of Linz. His research interests include the theory of charge and excitation energy transfer and of optical spectra of macromolecules, including the structure-based calculation of the parameters involved.
Here, Z = 1, ..., N − 1 counts the building blocks and b = 0, 1, 2, 3, ... the respective electronic states. The wavefunctions |A^(m)_a⟩ and |B^(Z)_b⟩ are eigenfunctions of the molecular Hamiltonians H^(m)_A and H^(Z)_B of the isolated building blocks, respectively. Hence, we have H^(m)_A |A^(m)_a⟩ = E^(m)_a |A^(m)_a⟩ and H^(Z)_B |B^(Z)_b⟩ = F^(Z)_b |B^(Z)_b⟩, with the respective electronic energies E^(m)_a of pigment m and F^(Z)_b of building block Z. The total Hamiltonian of the PPC contains the Coulomb coupling V^(m,Z)_AB between pigment m and building block Z and V^(Z,Z′)_BB between building blocks Z and Z′. In the absence of electron exchange between building blocks, the Hartree product |ψ^(m)_ab⟩ = |A^(m)_a⟩ Π_Z |B^(Z)_{b_Z}⟩ of eigenfunctions of the Hamiltonians of the isolated building blocks can be used to investigate the shift ΔE^(m)_a of the electronic energies E^(m)_a of pigment m caused by the Coulomb coupling V between the building blocks. The multi-index b = b_1, b_2, ..., b_Z, ... was introduced to abbreviate the notation for the electronic states of the environment. Within perturbation theory up to second order in the Coulomb coupling V, the shift ΔE^(m)_a is obtained, and the physical meaning of W_a becomes clear.
Fig. 1
Fig. 1 Thermodynamic cycle for the calculation of the deprotonation free energy ΔG_p(A_mH, A_m) of a protein-bound group A_mH at site m. The red and dashed circumferences symbolize the molecular surface and the ion exclusion layer, respectively, and H_s denotes the solvated proton.
Fig. 3
Fig. 3 Thermodynamic cycle for the calculation of the site energy E^0_m of a protein-bound pigment with the PB/QC method.
Fig. 4 Average diagonal part of the spectral density, J̄(ω) = (1/7) Σ_{m=1}^{7} J_mmmm(ω), obtained by NMA on the monomeric subunit of the FMO protein (histogram, red bars) 51 compared with spectral densities extracted from experimental data. The black solid line was obtained 59 from an analysis of fluorescence line narrowing spectra of the B777 complex 55 and the blue solid line from the FLN spectra of the FMO protein. 56 The area under the curves corresponds to the Huang-Rhys factor S, which for the two experimental spectral densities was obtained from a fit of the temperature dependence of the absorbance spectrum of the FMO protein, resulting in S = 0.42. The NMA value for the average S is 0.39.
… $\int_{-\infty}^{\infty} dt\, k_{m\to n}(t)$. Using $k_{m\to n}(t) = k^{*}_{m\to n}(-t)$, this rate constant reads

$k_{M\to N} = \frac{1}{\hbar^2} \int_{-\infty}^{\infty} dt\, \mathrm{tr}_{\mathrm{vib}}\bigl\{U_M^{\dagger}(t)\,V_{MN}\,U_N(t)\,V_{NM}\,\hat{W}^{\mathrm{eq}}_{MM}\bigr\};$
Fig. 5 Off-diagonal exciton-vibrational coupling constants $g_\xi(M,N)$ ($M \neq N$) are compared with diagonal coupling constants $g_\xi(M,M)$ of exciton states for the first 6000 normal modes of the FMO protein [51]. The black solid line shows the corresponding vibrational frequencies $\omega = \omega_\xi$.
The Liouville-von Neumann equation for this statistical operator, expanded with respect to the delocalized states, … $V^{(I)}_{KL}(t) = e^{i\omega_{KL}t}\, U^{\dagger}_{\mathrm{vib}}(t)\, V_{KL}\, U_{\mathrm{vib}}(t)$ … $e^{i\omega_{MN}t}\,\mathrm{tr}_{\mathrm{vib}}\bigl\{U^{\dagger}_{\mathrm{vib}}(t)\, V_{MN}\, U_{\mathrm{vib}}(t)\, V_{NM}\, \hat{W}^{\mathrm{eq}}_{\mathrm{vib}}\bigr\};$
| 22,688.4 | 2013-02-13T00:00:00.000 | [ "Physics", "Chemistry" ] |
Wnt9A Induction Linked to Suppression of Human Colorectal Cancer Cell Proliferation
Most studies of Wnt signaling in malignant tissues have focused on the canonical Wnt pathway (CWP) due to its role in stimulating cellular proliferation. The role of the non-canonical Wnt pathway (NCWP) in tissues with dysregulated Wnt signaling is not fully understood. Understanding the NCWP's role is important since these opposing pathways act in concert to maintain homeostasis in healthy tissues. Our preliminary studies demonstrated that lithium chloride (LiCl) inhibited proliferation of primary cells derived from colorectal cancer (CRC). Since LiCl stimulates cell proliferation in normal tissues and the NCWP suppresses it, the present study was designed to investigate the impact of NCWP components in LiCl-mediated effects. LiCl-mediated inhibition of CRC cell proliferation (p < 0.001) and increased apoptosis (p < 0.01) coincided with a 23-fold increase (p < 0.025) in the expression of the NCWP ligand Wnt9A. LiCl also suppressed β-catenin mRNA (p < 0.03), total β-catenin protein (p < 0.025) and the active form of β-catenin. LiCl-mediated inhibition of CRC cell proliferation was partially reversed by IWP-2 and by a Wnt9A antibody. Recombinant Wnt9A protein emulated the LiCl effects by suppressing β-catenin protein (p < 0.001), inhibiting proliferation (p < 0.001) and increasing apoptosis (p < 0.03). This is the first study to demonstrate induction of an NCWP ligand, Wnt9A, as part of a mechanism for LiCl-mediated suppression of CRC cell proliferation.
Introduction
Wnt signaling is an intricate ensemble of components of both canonical and non-canonical pathways involved with various processes including cellular proliferation, differentiation, motility and polarity [1]. Although several components are shared between the canonical and non-canonical pathways, the principal point of divergence entails the utilization of β-catenin. In the off state, in the absence of Wnt ligands, β-catenin, a second messenger utilized by the canonical pathway, is produced in a constitutive manner but is degraded by the action of the destruction complex. Activation of the canonical pathway leads to inhibition of the destruction complex resulting in accumulation of β-catenin in the cytosol followed by its translocation into the nucleus. In the nucleus, β-catenin binds to transcription factors, T-cell factor/lymphocyte enhancer factor (TCF/LEF) and turns on genes relevant for cell proliferation. The non-canonical Wnt pathway does not utilize β-catenin but instead, uses various other components for its activities [1]. To prevent uncontrolled cell proliferation, the canonical pathway is stringently regulated by various inhibitors, modulators and stimulators [2].
An added layer of regulation is provided by the non-canonical pathway which also tempers canonical pathway activity [3]. Thus, activation of the non-canonical Wnt pathway stimulates cellular differentiation and opposes cellular proliferation mediated by the canonical Wnt pathway. In healthy tissues these opposing pathways act in concert to maintain homeostasis [4,5]. One of the characteristic features of malignant tissues is disturbance of Wnt pathway signaling wherein components of the canonical and non-canonical pathways are modulated in a disproportionate manner leading to dysfunction of normal homeostatic mechanisms in cancer tissues.
Most studies of Wnt signaling in malignant tissues have focused on the canonical pathway due to its role in stimulating cellular proliferation. These studies have been instrumental in demonstrating that a prominent feature of Wnt signaling dysfunction in cancer cells involves the regulation of β-catenin. Due to mutations in either the gene(s) for β-catenin or the destruction complex, the protein is relieved from regulatory constraints and is able to stimulate target gene expression in an uncontrolled manner [6]. It is important to appreciate, however, that mechanisms other than the dysregulation of β-catenin have been shown to contribute to uncontrolled proliferation in cancer [7].
Given the importance of the non-canonical pathway in normal tissue homeostasis, it is conceivable that components of the non-canonical pathway might also contribute to the abnormal proliferation of malignant tissues. Demonstration that non-canonical components are suppressed in some cancers [8][9][10] is consistent with this hypothesis. These considerations provide a rationale for studying the function of the non-canonical Wnt pathway in malignant cells and the role this may play in the development and progression of different human cancers.
Recent studies show that lithium chloride (LiCl), a drug used for psychotropic disorders [11], suppresses cancer cell proliferation [12,13]. This inhibition contrasts with the effect observed in normal tissues, where LiCl-mediated inhibition of GSK3β leads to stabilization of β-catenin and activation of the canonical Wnt pathway, which stimulates cell proliferation [14]. In cancer tissues, however, LiCl has been shown to both inhibit [12] and stimulate [15] cell proliferation, which suggests that the mechanisms activated by LiCl in normal tissues are modulated in some cancers. The inhibition by LiCl entails various mechanisms including activation of p53 and NF-κB signaling [12,16]; however, very little is known about the role of the Wnt pathway in this process. Given the importance of the non-canonical Wnt pathway in constraining the canonical pathway in normal tissues, and the dysregulation of the canonical Wnt pathway in different cancers, the present study was designed to assess the effect of LiCl on human colorectal cancer cell proliferation and to investigate the impact of non-canonical Wnt pathway components in these effects. A principal goal of the study was to determine whether non-canonical pathway components regulate β-catenin expression in human colorectal cancer. The results demonstrate that human colorectal cancer cell proliferation in vitro can be inhibited by the induction of Wnt9A, which functions as a non-canonical ligand and suppresses β-catenin protein.
Effect of LiCl on CRC Cell Proliferation and Apoptosis
Treatment of primary short term colorectal cancer cell lines (n = 4) resulted in a concentration-dependent suppression of cell proliferation by LiCl (Figure 1A). At 20 mM, proliferation was inhibited in 5/5 CRC lines (mean ± standard error of the mean (SEM) = 74% ± 18%; p < 0.001) relative to media controls (Figure 1B). LiCl elicited significant apoptosis in 5/5 CRC lines (p < 0.01, Figure 1C). These results suggest that signaling mechanisms elicited by LiCl that stimulate cell proliferation in normal cells may be dysfunctional in these CRC cells.
Effect of LiCl on β-Catenin Message and Protein in CRC Cells
Based on the finding of proliferation inhibition and induction of apoptosis in CRC cells treated with LiCl, studies to assess canonical Wnt pathway activity were performed. Because canonical Wnt signaling requires stabilization of β-catenin leading to stimulation of TCF/LEF-mediated transcription, the effects of LiCl treatment on β-catenin mRNA and protein were determined. Relative to media controls, LiCl elicited a 2-fold decrease of β-catenin mRNA (p < 0.03; Figure 2A). Total β-catenin levels increased significantly from baseline in media-treated cells (p < 0.01 and 0.001 at 48 and 72 h, respectively). In response to LiCl, total β-catenin levels were significantly decreased relative to media-treated cells at 72 h (p < 0.025, Figure 2B) and were lower than baseline, although not in a statistically significant manner. Comparable results were obtained for measurements of the active form of β-catenin in media- and LiCl-treated cells, which demonstrated a precipitous drop relative to media-cultured cells by 72 h (Figure 2C,D).
Effect of LiCl on the Expression of Wnt Pathway Components in CRC Cells
Given the capacity of the non-canonical Wnt pathway to suppress canonical Wnt signaling, studies to assess the effects of LiCl on non-canonical Wnt pathway components were undertaken. Initial exploratory studies relied on evaluating the effects of LiCl on the expression of genes related to the Wnt pathway. In Figure 3A, Wnt genes modulated by ≥2-fold in at least 4 of the 5 cell lines are depicted. The expression of individual Wnt pathway genes across these 5 cell lines was qualitatively variable (Figure 3A) with the exception of two genes, namely the Wnt ligand Wnt9A and naked cuticle homolog 1 (NKD1). With respect to Wnt9A, LiCl increased expression by an average of 23-fold (range = 7-48; p < 0.025; Figure 3B), while expression of NKD1 was increased by an average of 4.5-fold (p < 0.01; Figure 3B). Dkk1 expression was affected substantially by LiCl, but the effects were qualitatively inconsistent, ranging from a 141-fold increase in one cell line to a 15-fold decline in another. Other genes that were increased ≥2-fold in some but not all cell lines and that achieved statistical significance by a pooled analysis included the ligands Wnt8A (p < 0.01) and Wnt7A (p < 0.025), and the Wnt inhibitor SFRP4 (p < 0.01; Figure 3B).
Effect of Wnt Ligand Synthesis/Secretion on LiCl-Mediated Proliferation Inhibition
The gene expression results revealed that several Wnt pathway components that are increased in response to LiCl are ligands. Thus, our next studies investigated the effect of blocking Wnt ligand production on CRC cell proliferation. The production of a Wnt ligand involves several steps including acylation by porcupine, a protein-cysteine N-palmitoyltransferase that is inhibited by IWP-2 [17]. We tested various concentrations of IWP-2 against the effects of 20 mM LiCl on two CRC cell lines and determined that IWP-2 reversed the LiCl-mediated suppression of CRC cell proliferation in a concentration-dependent manner, with 1.0 µM being the most effective concentration. We therefore utilized 1.0 µM IWP-2 for our subsequent studies.
The results reveal that IWP-2 partially but significantly reversed the suppression of CRC cells by LiCl (p < 0.05; Figure 4A), implicating the production of Wnt ligands in LiCl-mediated proliferation inhibition. To test this possibility further, and to explore whether this effect was due to a secreted ligand, conditioned media were generated from cells treated with LiCl in the presence or absence of IWP-2 and used to treat the matched cell lines for 72 h prior to measurement of proliferation. Cells treated with conditioned media from the autochthonous LiCl-treated cell line showed significantly inhibited proliferation (p < 0.05, Figure 4B) relative to cells treated with conditioned media from untreated cells. Furthermore, inhibition was absent in cells treated with conditioned media from LiCl-treated cells that had been co-incubated with IWP-2 (Figure 4B).
The studies investigating Wnt pathway gene expression, ligand synthesis/secretion, and their relationship to LiCl-mediated proliferation inhibition of human CRC cells are consistent with a role for a non-canonical pathway ligand in these effects. As noted above, the most consistent effect of LiCl on gene expression, both qualitatively and quantitatively across all five cell lines, was on the gene for the non-canonical ligand Wnt9A. Thus, our next experiments investigated the effect of a specific Wnt9A antibody on LiCl-mediated inhibition of CRC cell proliferation. LiCl-mediated inhibition of CRC cell proliferation was reversed significantly in all cell lines by the specific Wnt9A antibody (p < 0.005; Figure 4C), whereas no effect was elicited by an isotype-matched control IgG.
Effect of Recombinant Wnt9A Protein on Proliferation, Apoptosis and Active β-Catenin Protein Levels in CRC
The collective results implicating a role for the non-canonical ligand Wnt9A in mediating suppression of proliferation in LiCl-treated human CRC cells were evaluated further using recombinant Wnt9A protein. Dose titration studies demonstrated that recombinant Wnt9A protein suppressed CRC cell proliferation in a concentration-dependent manner which, at 500 ng/mL, led to proliferation inhibition of 24% (p < 0.001, Figure 5A). Moreover, at this concentration Wnt9A was shown to increase apoptosis (p < 0.03, Figure 5B), as well as suppress active β-catenin protein levels (p < 0.001, Figure 5C).
Schematic Representation of the Mechanism Involved in LiCl-Mediated Suppression of CRC Cell Proliferation
The mechanism utilized by LiCl for the suppression of CRC cell proliferation is depicted in Figure 6. It shows that the suppression of CRC cell proliferation is in part due to the induction of Wnt9A, which regulates β-catenin activity. Suppression of Wnt9A induction by IWP-2, or sequestration of the ligand using antibody, results in reversal of the LiCl-mediated effects.
Discussion
Given the widely recognized action of LiCl as an activator of β-catenin through the canonical Wnt pathway, resulting in stimulated cellular proliferation [14], and in contrast to our demonstration of LiCl-mediated proliferation inhibition of human CRC cells from surgical specimens, the current investigation was designed to characterize the effect(s) of LiCl on the Wnt pathway in human colorectal cancer. The results demonstrate that LiCl suppresses CRC cell proliferation by a mechanism that entails increased expression and secretion of a non-canonical Wnt ligand, Wnt9A, in conjunction with suppression of β-catenin protein levels. Thus, LiCl-mediated inhibition of both cellular proliferation and active β-catenin protein levels was replicated by recombinant Wnt9A protein, linking this non-canonical Wnt ligand to the effects of LiCl. Similarly, the reversal of these effects in LiCl-treated CRC cells by a non-specific inhibitor of Wnt ligand synthesis/secretion, and by inhibition of Wnt9A with a specific antibody, further supports the contention that the activity of the canonical Wnt pathway in human CRC can be attenuated by increasing the expression of some components of the non-canonical Wnt pathway. That well-tolerated agents such as LiCl have the capacity to do this in vitro suggests that a similar approach in vivo may be therapeutically beneficial.
A principal feature of aberrant Wnt signaling characteristic of malignancy is hyperactivation of the canonical pathway in association with dysregulation of β-catenin. One consequence of this aberrant regulation is attenuated degradation of β-catenin resulting in chronic activation of genes that promote cell proliferation. However, recent studies indicate that the elevation and persistence of β-catenin levels alone are insufficient to explain the abnormal proliferative behavior of cancer cells and that additional modulators are required for the development of the neoplastic phenotype [18,19]. Given the importance of the non-canonical pathway in constraining the ability of the canonical pathway to promote cell proliferation, it is conceivable that disturbances in induction and/or function of various non-canonical pathway components are involved in neoplasia. This hypothesis is supported by studies showing that non-canonical pathway genes are suppressed in cancer tissues [8][9][10].
Nevertheless, how suppression of non-canonical pathway genes is related to neoplasia is not completely understood. A plausible explanation worth further investigation is that constraints imposed by the non-canonical pathway in normal tissues are absent in cancer cells due to inhibition of non-canonical pathway components. This idea is supported by studies which show that forced expression of non-canonical components in tumor cell lines results in decreased β-catenin signaling concomitant with decreased colony formation and cellular proliferation [10].
Wnt9A, formerly known as Wnt14, has been classified as both a canonical [20,21] and a non-canonical ligand [22][23][24]. The type of signal that is activated depends on the milieu of signaling components expressed in the cells [24,25]. Ligands that stabilize β-catenin protein, leading to increased cytosolic levels that can translocate to the nucleus and bind TCF/LEF transcription factors to activate proliferation-stimulating genes, are considered canonical. Ligands that counter mechanisms stabilizing β-catenin protein levels are considered non-canonical. Based on the observations herein, and the studies cited, we conclude that Wnt9A functions as a non-canonical ligand in human CRC cells.
The substantial and consistent upregulation of Wnt9A gene expression in LiCl-treated CRC cells (ranging from 7- to 48-fold across 5 short term lines) that we observed in association with proliferation inhibition (ranging from 58% to 98%), coupled with the literature cited here, justified investigating the contribution of Wnt9A to these effects. In addition, the results obtained by blocking ligand secretion, by treatment with a specific anti-Wnt9A antibody, and by treatment with recombinant Wnt9A protein all point to a role for Wnt9A in the suppression of CRC cells we report herein. The suppression of β-catenin protein levels by Wnt9A, which emulates the LiCl effects, further adds to this notion and indicates that the suppression of CRC cell proliferation is mediated in part through the activation of the non-canonical pathway, which inhibits the canonical pathway. Nevertheless, while all these maneuvers produced results that were quantitatively significant and qualitatively consistent with a role for Wnt9A as a tumor suppressor, the magnitude of proliferation inhibition observed cannot be attributed to the effects of Wnt9A alone. Thus, other genes or factors must be involved.
One such possibility is the NKD1 gene. In conjunction with Wnt9A, NKD1 was the only other gene that was significantly upregulated in all cell lines. NKD1 is a protein that functions as a negative feedback inhibitor of the Wnt pathway but has the capacity to modulate both canonical and non-canonical signaling [26]. It is, therefore, conceivable that this gene may act in concert with Wnt9A to enhance the suppressive effects observed with LiCl. Further studies are needed to test this possibility.
The contribution of Wnt pathway proteins that were significantly upregulated in response to LiCl in at least some of the 5 cell lines used in this study may also merit further investigation. Additional candidates based on our studies include Dkk1, SFRP4, Wnt7A and Wnt8A. Dkk1 is a potent inhibitor of the canonical Wnt pathway as well as a target of it: activation of the canonical pathway increases Dkk1 expression, which results in feedback inhibition of the pathway [27]. This is also true for SFRP4, which can bind to Wnt ligands and inhibit both pathways [2]. The Wnt ligands Wnt7A and Wnt8A are considered non-canonical and canonical ligands, respectively [28]. These components were not investigated in this study due to the qualitative and quantitative inconsistency of the LiCl effects on their genes across all 5 cell lines. Nevertheless, the partial reversal of the effects of LiCl in these lines by Wnt9A strongly suggests that further studies assessing the contribution of other non-canonical Wnt pathway components are justified.
The results presented herein suggest that the non-canonical Wnt ligand, Wnt9A has the potential to act as a tumor suppressor in cells that have sustained oncogenic mutations sufficient to hyperactivate β-catenin action. Studies that demonstrate decreased expression of the Wnt9A gene in different cancers including colorectal cancer are consistent with this possibility [29]. Such a role is also inferred by the demonstration that increased Wnt9A expression is related to apoptosis and cell cycle arrest [30] as well as by studies which show that knocking down expression of Wnt9A leads to increased proliferation of breast cancer cells [23]. In colorectal cancer, frameshift mutations in the Wnt9A gene sufficient to result in loss of function of the protein have been reported [31].
The principal objective for this study was to investigate the effect of various components of the non-canonical Wnt pathway in the LiCl-mediated suppression of colorectal cancer cell proliferation and to determine if such effects involve regulation of β-catenin expression. That objective was met with the discovery that the expression of a non-canonical Wnt ligand, Wnt9A was significantly increased by LiCl and that this ligand inhibited colorectal cancer cell proliferation in association with suppression of β-catenin expression ( Figure 6). These observations justify additional studies to fully comprehend the mechanism that leads to the suppression of colorectal cancer cells by LiCl. For example, the elucidation of the mechanism for the induction of the non-canonical Wnt ligand would be of vital importance since the mechanism for the expression of Wnt ligands, in general, is not fully understood. Such an understanding could lead to the development of therapeutic interventions that stimulate induction of non-canonical Wnt ligands that counteract the effects of the proliferative canonical pathway. It is conceivable that studying the effects of LiCl on GSK-3β could potentially lead to the elucidation of the mechanism utilized for the induction of the Wnt ligands.
In conclusion, this is the first study to demonstrate induction of a non-canonical ligand, Wnt9A as part of a mechanism for LiCl-mediated suppression of CRC proliferation. The LiCl-mediated suppression is related to the activation of the non-canonical pathway but the involvement of other pathways is plausible. Nevertheless, the results of this study also suggest that one consequence of malignant transformation may be the suppression of non-canonical Wnt pathway activity, or the release of canonical Wnt pathway control by non-canonical pathway components. In human CRC, our results favor the first possibility by demonstrating that stimulating the non-canonical pathway restores canonical pathway regulation. Thus, agents that upregulate the non-canonical pathway components, especially Wnt9A, and activate this pathway may be therapeutically beneficial in human colorectal cancer.
Study Design
Short term, primary cell lines were established from resected colorectal tumor specimens, classified and characterized by the attending surgical pathologist on each case. The tumors were obtained under an institutional review board approved protocol, from five patients (age range = 40-64 years) who were undergoing treatment for colorectal adenocarcinoma. Short term, primary CRC cell lines (n = 5) were established from resected tumors from patients with metastatic and/or recurrent disease. CRC cells were treated with LiCl (Sigma-Aldrich Corp., St. Louis, MO, USA), an activator of the canonical Wnt pathway (CWP), in the presence and absence of various Wnt pathway modulators including: IWP-2 (Sigma-Aldrich Corp.), a pan inhibitor of CWP and NCWP Wnt ligand secretion; conditioned media (CM) from LiCl ± IWP-2 treated cells; a specific antibody against the Wnt ligand Wnt9A (Santa Cruz Biotechnologies, Inc., Santa Cruz, CA, USA); and recombinant Wnt9A protein (Genemed Biotechnologies, Inc., San Francisco, CA, USA). Cell proliferation and apoptosis assays and RNA isolation for quantitative PCR gene expression array analysis were performed at 72 h. ELISA was performed on cell lysates at 24, 48 and 72 h.
Tumor Tissues Procurement from Patients
The protocol entitled "The BioSample/Tissue Repository at Midwestern Regional Medical Center, Inc." was approved by the Western Institutional Review Board, Panel #17. As per the IRB protocol, the specific requirement to access the tumors required patient's written informed consent for the tumor repository operations which include: (1) storage of the extra tissues not needed for clinical care; and (2) permission to conduct research on these tissues stipulating that all identifiers are stripped from the specimen and can only be matched to the patient and their clinical data through a code provided by the repository. The code is under the control of the principal investigator, Tan, Chair of the Department of Pathology and Laboratory Medicine, and not shared with any of the investigators who use the tissues in the repository.
Wnt Pathway Focused Gene Expression Arrays
Transcriptional expression of genes was determined on the extracted RNA using 96-well real-time PCR arrays. cDNA was synthesized using the RT² First Strand cDNA Kit (SABiosciences Corp., Frederick, MD, USA) according to the manufacturer's instructions. Transcriptional gene expression was determined using the Human Wnt Signaling Pathway Plus RT² Profiler PCR Array System (SABiosciences Corp., Frederick, MD, USA), according to the manufacturer's instructions. Real-time PCR was performed using the MyiQ Real-time PCR detection system (Bio-Rad Laboratories Inc., Hercules, CA, USA). All transcriptional gene expression analyses were performed on the SABiosciences Corp. web portal using the RT² Profiler PCR Array Data Analysis program (version 3.2). Transcriptional gene expression was defined as fold-change versus media controls.
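Fold-change versus media controls is conventionally obtained from such arrays with the 2^(−ΔΔCt) method of Livak and Schmittgen; a minimal sketch with hypothetical Ct values and a hypothetical reference gene is shown below. The exact normalization applied by the analysis portal (e.g. its panel of reference genes) may differ.

```matlab
% Fold-change of a target gene versus media control via the 2^(-DeltaDeltaCt)
% method. All Ct values are hypothetical; 'ref' denotes a housekeeping gene.
Ct_target_treated = 24.1;   % target gene (e.g. Wnt9A) in LiCl-treated cells
Ct_ref_treated    = 18.0;   % reference gene in LiCl-treated cells
Ct_target_control = 28.6;   % target gene in media control
Ct_ref_control    = 18.2;   % reference gene in media control

dCt_treated = Ct_target_treated - Ct_ref_treated;  % normalize to reference gene
dCt_control = Ct_target_control - Ct_ref_control;
ddCt        = dCt_treated - dCt_control;           % treated relative to control
foldChange  = 2^(-ddCt);                           % fold-change vs media control
fprintf('Fold-change vs media control: %.1f\n', foldChange);
```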
ELISA for Total and Active β-Catenin
Total and active β-catenin levels were determined by ELISA using the Human Total β-Catenin DuoSet IC (R&D Systems, Minneapolis, MN, USA). To determine active β-catenin levels, the capture antibody provided with the kit was replaced by Anti-Active-β-Catenin (anti-ABC), clone 8E7 (EMD Millipore, Billerica, MA, USA) [32]. Cells were subjected to the various treatments for 24, 48 or 72 h and lysed by the addition of lysis buffer. ELISA was performed on cell lysates according to the manufacturer's instructions.
Cell Proliferation Assay
Cell proliferation was measured by a standard MTS Assay (Promega, Madison, WI, USA). A total of 5000 cells in logarithmic growth phase were inoculated into treatment wells in quadruplicate. Cells were subjected to various treatments (indicated above) and incubated for 72 h. Three hours before the end of the incubation period, MTS reagent was added to the plates and absorbance readings were performed on a multimode plate reader (Enspire 2300-001L, PerkinElmer, Waltham, MA, USA) at 490 nm.
Apoptosis Assay
Cell apoptosis was determined using Caspase-3 Colorimetric Assay kit (R&D Systems, Minneapolis, MN, USA). Cells were subjected to various treatments for 72 h and lysed by the addition of the lysis buffer provided with the kit. The Caspase-3 assay was performed on cell lysates according to the manufacturer's instructions using a multimode plate reader (Enspire 2300-001L, PerkinElmer, Waltham, MA, USA). The results are expressed as fold change relative to media controls.
Statistical Analysis
The data were expressed as means ± standard error of the mean (S.E.M.). Student's t-test and one-way analysis of variance (ANOVA) with post hoc pairwise multiple comparisons using the Student-Newman-Keuls method were performed to determine the significance of differences between treatments. p-values of less than 0.05 were considered significantly different.
Conclusions
This is the first study to demonstrate induction of a non-canonical ligand, Wnt9A, as part of a mechanism for LiCl-mediated suppression of CRC proliferation. Inducing components of the non-canonical Wnt pathway may be therapeutically beneficial for attenuating uncontrolled proliferative canonical signaling in malignant tissues.
| 7,778.4 | 2016-04-01T00:00:00.000 | [ "Biology", "Chemistry" ] |
Multiple scattering reduction in instantaneous gas phase phosphor thermometry: applications with dispersed seeding
In this study the structured laser illumination planar imaging (SLIPI) technique is combined with gas phase phosphor thermometry to measure two-dimensional temperature fields quasi-instantaneously with reduced bias from multiple scattering. Different reconstruction strategies are implemented, evaluated and compared, including a two-pulse and a one-pulse SLIPI approach. A gradient-based threshold algorithm for particle detection is applied to conventional planar light sheet imaging as an alternative to reduce the bias caused by multiple scattering in seeding-free regions. As a demonstration, measurements are performed in a canonical flow configuration, consisting of a heated, turbulent air jet surrounded by an ambient co-flow. Both air flows are seeded with the thermographic phosphor BaMgAl10O17:Eu2+. Conventional light sheet imaging in the context of gas phase phosphor thermometry suffers from multiple scattering, causing a significant temperature bias and low temperature sensitivity. Applying the gradient threshold algorithm removes areas without any seeding particles, which improves accuracy, precision and temperature sensitivity. However, multiple scattering influences are still present and may cause an increasing bias, particularly at higher seeding densities. One-pulse (1p) SLIPI exhibits high accuracy at intermediate precision, although multiply scattered luminescence is not fully removed and spatial resolution is lowered. Two-pulse (2p) SLIPI removes the impact of multiple scattering most effectively and is recommended for high temperature sensitivity and accuracy; however, 2p-SLIPI exhibits reduced temperature precision.
Motivation and introduction
In mechanical, chemical and process engineering, fluid flow temperature is a vital parameter. Obtaining accurate temperature information is crucial for understanding the governing physical, chemical or even biological processes, for developing novel applications and for monitoring system operations. Examples span from combustion technologies to the life sciences, where temperature sensing is of crucial importance, for instance in living cells [1,2] or in in vivo thermometry [3] explored for disease detection in biomedical hosts.
Thermographic phosphors (TGPs) consist of ceramic host materials doped with luminescence-active rare-earth or transition metal ions [9,11]. Laser-induced excitation is in the ultraviolet, while the temperature-sensitive emission is typically in the visible spectral range [11]. Depending on temperature, the intensity and spectral distribution of the luminescence emission as well as the temporal decay time vary. These properties are utilized for temperature sensing. Wall thermometry using TGPs was reviewed by Aldén et al [8] and Brübach et al [9], and fluid thermometry by Abram et al [10]. For fluid phase thermometry applications, the TGP tracer is dispersed as a seeding powder into the flow (individual particle size of several µm). In the study presented here, the phosphor BaMgAl10O17:Eu2+ (BAM) is used [15,16], which is common in gas phase phosphor thermometry [11,12,14,17].
Previous investigations indicated that multiply scattered luminescence emission originating from seeding particles can strongly bias temperature measurements [12,13,17,18]. Three primary mechanisms of multiple scattering were identified: (i) Luminescence emission originating from the illuminated plane is multiply scattered and emits to the camera from off-plane locations. (ii) Incident laser light is multiply scattered to off-plane locations and excites luminescence, emitting to the camera. (iii) Incident laser light is multiply scattered to off-plane locations and excites luminescence that is subsequently multiply scattered prior to detection.
Two major effects on phosphor thermometry need to be considered. First, blurred signals originating from multiple scattering are observed within the entire image, including seeding-free areas. Second, temperature information from different locations is mixed.
Multiply scattered light and the resulting effects can be suppressed by applying a structured illumination approach designed for macroscopic flow systems, called structured laser illumination planar imaging (SLIPI), introduced to fluid mechanics by Berrocal et al [19] and Kristensson et al [20]. SLIPI is based on recording multiple sub-images resulting from spatially intensity-modulated light sheets at different spatial phase shifts. The modulation pattern is retained by signal photons that are not multiply scattered ('un-scattered'), while multiply scattered light loses this structure, appearing as a blurred offset. By SLIPI post-processing, multiple scattering contributions can be mitigated. This technique has demonstrated its capabilities in numerous applications in the past and has advanced over the years. Related to the number of sub-images recorded by p pulses, three different approaches are distinguished: (i) 1p-SLIPI, based on one spatially modulated image; (ii) 2p-SLIPI, using two spatially modulated sub-images, phase-shifted to each other by 180°; (iii) 3p-SLIPI, based on three spatially modulated sub-images, with phases of 0°, 120° and 240°, respectively.
The original three-pulse (3p) approach was developed for time-averaged measurements in dense sprays [19] and further advanced in [21] for freezing rapid flow motion. Two-pulse [22][23][24] as well as one-pulse SLIPI [24,25] applications were demonstrated, simplifying the experimental complexity and cost. SLIPI was combined with various laser optical diagnostics like LIF/Mie droplet sizing [23,26], LIF thermometry [27], Rayleigh thermometry [28,29] and PIV [30]. A first combination of SLIPI with gas phase phosphor thermometry (SLIPI-LIL) was presented in [18] for time-averaged temperature field measurements. This study extends the previous SLIPI-LIL approach to quasi-instantaneous temperature field measurements in gaseous flows. A novel particle identification algorithm is introduced to distinguish signal contributions originating from multiple scattering in seeding-free regions. However, multiple scattering superimposed on particles is not suppressed by this algorithm, only that in non-seeded areas. For the purpose of demonstration, investigations are performed in a canonical jet and co-flow configuration using the common phosphor BaMgAl10O17:Eu2+ (BAM) as tracer [11,12,[14][15][16][17]. The capability of 1p- and 2p-SLIPI-LIL gas phase thermometry is benchmarked against the conventional approach, considering temperature sensitivity, accuracy and precision of each reconstruction technique.
Description of the optical techniques
To assess the influence of multiple light scattering on quasi-instantaneous gas phase thermometry, laser-induced luminescence (LIL) thermometry and structured laser illumination planar imaging (SLIPI) are combined. This section summarizes the fundamentals of these techniques.
Laser-induced luminescence thermometry
For two-dimensional gas phase thermometry based on laser-induced luminescence, a two-color ratio technique is employed. This approach derives a temperature-dependent ratio by dividing two-dimensional images I(x, y) recorded simultaneously in two different spectral ranges Δλ1 and Δλ2 of the luminescence emission spectrum [9,12]. The concept is outlined in figure 1. Ideally, one spectral range Δλ is not influenced by temperature variations while the other features a pronounced temperature dependency. The intensities I are integrated spectrally within their respective spectral ranges and temporally across the luminescence decay:

$R = \frac{I_{\Delta\lambda_1}}{I_{\Delta\lambda_2}} = C\,\frac{\Phi_{\Delta\lambda_1}(T)}{\Phi_{\Delta\lambda_2}(T)}. \qquad (1)$

In equation (1), influences like laser fluence, the number of particles in the probe volume or the absorption cross section cancel each other. According to Lee et al [12], the ratio reduces to the respective quantum yields Φ of the luminescence process, specifying the number of luminescence photons emitted in Δλ1 and Δλ2, and a calibration constant C. Thus, for conversion of the two-color ratios R to temperatures T, an in situ calibration is required. Additionally, background radiation and vignetting must be corrected for [8,10].
Structured laser illumination planar imaging for multiple scattering reduction
Structured laser illumination planar imaging (SLIPI) has been established as a suitable concept for reducing multiple light scattering in flow diagnostics. SLIPI is based on recording multiple sub-images I_n resulting from n spatially intensity-modulated laser light sheets at different spatial phase shifts (Δφ = 2π/n) [19]. Multiply scattered photons do not retain the modulation of the incident laser light sheets in the recorded sub-images, while undisturbed photons do [19]. This makes it possible to distinguish between multiply and un-scattered photons via image post-processing, such that the desired in-plane signal, i.e. the SLIPI image I_SLIPI, can be extracted. In order to preserve spatial resolution, the original SLIPI approach was based on n = 3 sub-images for image reconstruction [19]. This approach is well suited for averaged imaging. For instantaneous imaging [21], 3p-SLIPI requires costly equipment and careful optical alignment. As this study aims at providing instantaneous images with more affordable equipment, the one-pulse (1p) and two-pulse (2p) SLIPI approaches have been chosen. Both approaches are briefly outlined in the following. 2p-SLIPI requires two modulated laser light sheets with a spatial phase shift of half a modulation period (Δφ = π), resulting in the sub-images I_0 and I_π [22]. For the 2p-SLIPI reconstruction, the sub-images are subtracted from each other (I_2p-SLIPI ∝ I_0 − I_π), thereby removing the multiply scattered intensity I_MS [22]. The resulting I_2p-SLIPI contains residual line structures [22], decreasing its spatial resolution. Summation of both sub-images yields a so-called conventional image (I_conv ∝ I_0 + I_π), equal to an image originating from classical planar laser imaging, without spatial modulation and resolution losses but biased by multiple scattering. In the following, I_conv is used as a benchmark to evaluate the results of the SLIPI reconstructions. The 1p-SLIPI approach by Mishra et al [24] aims to reduce the experimental and post-processing effort even further. Multiple scattering is suppressed using only one sub-image (either I_0 or I_π; here: I_0). The main idea of 1p-SLIPI is that multiple scattering blurs the images, such that its influence is characterized by low spatial frequencies. Transforming an intensity-modulated sub-image I_0 to its Fourier space (F), two prominent features are observed, as shown in figure 2(a): (1) a component centered at the origin of the Fourier space, representing the averaged image intensity [31]; (2) a component at frequency ν corresponding to the modulation frequency of the SLIPI light sheet. The majority of the multiply scattered light is located in the central part of the Fourier spectrum at low frequencies. By applying a modulation, the un-scattered information is duplicated around the modulation frequency (ν, see figure 2). Thus, by using a sufficiently high frequency ν, the desired un-scattered information can be isolated from most of the multiple light scattering component. Photons that are not multiply scattered preserve the intensity modulation of the light sheet; thus, the signal contained in the frequency component ν corresponds to signal unbiased by multiple scattering. For reconstruction, two different approaches are common [24]: lock-in detection and detection of the first-order peak. Here the latter approach is followed, as outlined in figure 2(b). The entire Fourier spectrum is shifted in the v-direction to offset one ν component to the center [24].
A low-pass filter G is applied, removing the averaged intensity contribution which contains the multiply scattered light (see figure 2(c)). By back-transformation to physical space, the filtered image I_1p-SLIPI is obtained, which is significantly less influenced by multiple scattering. The inverse Fourier transformation (F^−1) is mathematically expressed by

$I_{1p\text{-}\mathrm{SLIPI}}(x,y) = \left|\mathcal{F}^{-1}\{\,G(u,v)\,\mathcal{F}\{I_0\}(u, v+\nu)\,\}\right|,$

where u and v are spatial frequencies and G is the low-pass filter in the frequency domain.
Note that I_2p-SLIPI, I_1p-SLIPI and I_conv can all be calculated from the identical pair of quasi-simultaneous raw sub-images I_0 and I_π. This allows for a direct comparison and evaluation of these approaches.
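To make the difference between the reconstructions explicit, the recorded sub-images can be written with a minimal model, assuming an ideal sinusoidal modulation along y with frequency ν, where I_c collects all unmodulated light (including the multiply scattered part I_MS) and I_s is the modulated in-plane signal:

$I_0 = I_c + I_s\sin(2\pi\nu y), \qquad I_\pi = I_c - I_s\sin(2\pi\nu y),$

so that

$I_{\mathrm{conv}} \propto I_0 + I_\pi = 2I_c, \qquad I_{2p\text{-}\mathrm{SLIPI}} \propto |I_0 - I_\pi| = 2I_s\,|\sin(2\pi\nu y)|.$

The subtraction cancels every unmodulated contribution while the summation retains it; the rectified sine term oscillates at 2ν and corresponds to the residual line structure that is subsequently removed by low-pass filtering.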
Experimental methodology
The combination of quasi-instantaneous two-color ratio LIL thermometry and SLIPI is applied to a canonical jet in co-flow configuration. The experimental setup of this study is shown in figure 3.
Flow configuration
The canonical flow configuration introduced in [12] and [18] consists of a circular jet (16 mm diameter), surrounded by a co-flow of 140 mm in diameter (side view in figure 3(b)). Jet and co-flow are operated with air seeded with TGP particles of BaMgAl10O17:Eu2+ (BAM, Phosphor Technology Ltd., KEMK63/UFP2). The jet is heated and operated in the turbulent regime (Re = 6400), while the co-flow is at ambient temperature and laminar conditions (bulk velocity ≈0.2 m s−1). The flow configuration is operated in steady state and controlled by commercial mass flow controllers (Bronkhorst High-Tech). Flow temperatures are continuously monitored using type K thermocouples.
Laser optical setup
The laser optical setup shown in figure 3 is designed to perform quasi-instantaneous two-dimensional 2p-SLIPI-LIL gas phase thermometry measurements.
Two independent laser pulses are provided by two Q-switched Nd:YAG lasers (third harmonic at 355 nm, 5 Hz repetition rate). The time separation between the two laser pulses is 18 µs, resulting in quasi-instantaneous temperature field measurements. This value is chosen to exceed the ≈2 µs lifetime of BAM at ambient conditions [16], avoiding any bias from sequential excitation of the TGP particles. The two laser pulses pass through individual Galilean telescopes and rectangular apertures to select beam regions with approximately homogeneous laser intensity. Ronchi gratings (Edmund Optics, 5 line pairs/mm) create individual diffraction patterns for each beam path. The spatial phase shift between the two modulated SLIPI light sheets is aligned by vertically shifting one of the Ronchi gratings [18]. Both pulses are overlapped and focused on a so-called frequency cutter device (side view in figure 3(a)), which selects the two ±1st diffraction orders while blocking all remaining orders. By overlapping the two resulting coherent beams after the frequency cutter, a sinusoidal beam profile is generated by interference, finally resulting in a SLIPI laser light sheet. This method of generating the sinusoidally modulated light sheet pattern is called fringe projection [32]. The light sheets are shaped with respect to modulation period and thickness using cylindrical lenses prior to entering the field-of-view (FOV). Within the FOV, the SLIPI light sheets are 285 ± 45 µm thick and have a modulation period of 270-330 µm. Average laser pulse energies are 0.43-0.52 mJ with 15-22 µJ standard deviation (energy statistics based on 500 single laser pulses).
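For orientation, the fringe period Λ produced by two coherent beams of wavelength λ crossing at a full angle θ follows from two-beam interference:

$\Lambda = \frac{\lambda}{2\sin(\theta/2)} \approx \frac{\lambda}{\theta} \quad (\theta \ll 1).$

With λ = 355 nm, the observed period of Λ ≈ 300 µm corresponds to a crossing angle of roughly θ ≈ λ/Λ ≈ 1.2 mrad in the FOV. The actual angle is set by the grating period and the relay optics and is not reported here, so this is only an order-of-magnitude estimate.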
The resulting laser-induced luminescence is collected by an achromatic lens and spectrally separated by a dichroic beam splitter (Chroma Technology Corp., T425LPXR) into a blue (λ < 425 nm, reflected) and a red channel (λ > 425 nm, transmitted), corresponding to Δλ1 and Δλ2 in equation (1). To define the spectral ranges more precisely, additional bandpass interference filters (Chroma Technology Corp., compare figure 3 for center wavelength/full width at half maximum) are placed in front of each objective lens (f-number f# 1.4, focal length 85 mm). Each channel is recorded by an individual interline-transfer CCD camera (PCO AG, sensicam qe double shutter). Both cameras were operated in double shutter mode. The first frame (5 µs exposure time) records sub-image I_0; the second, subsequent frame (nearly 100 ms exposure time, limited by the readout time of the first frame) captures I_π. To avoid any bias from ambient background in I_π, the camera system is shielded by a box and the ambient light is switched off during recordings. Recordings are performed at 5 Hz repetition rate. The entire measurement system is synchronized using a digital delay pulse generator (Quantum Composers Inc.).
Data post-processing
Data post-processing for deriving quasi-instantaneous two-dimensional temperature fields consists of three steps. The workflow is outlined in figure 4. First, an image reconstruction post-processing is executed. Second, a gradient-based threshold criterion is applied to the reconstructed images to exclude any bias from seeding-free regions. Finally, a thermometry routine converts intensity information into temperature information. All procedures are implemented in MATLAB R2016a (The MathWorks, Inc.).
Image reconstruction
The image reconstruction algorithm provides three alternative ways to process the raw sub-images I_0 and I_π according to the outlines in section 2.2: (1) the conventional reconstruction I_conv, (2) the 2p-SLIPI reconstruction I_2p-SLIPI and (3) the 1p-SLIPI reconstruction I_1p-SLIPI. All image reconstructions are based on an identical pair of quasi-simultaneous raw sub-images I_0 and I_π and are performed for the blue and red camera channel separately, using identical algorithms.
The conventional and 2p-SLIPI reconstructions use both sub-images I_0 and I_π. A more detailed description of the algorithmic implementation of these reconstructions is given in [18]. The procedure is briefly summarized in the following. Initially, intensity correction steps are performed, compensating for temporal and spatial intensity fluctuations between the sub-images, e.g. originating from the laser or from fluctuating seeding densities. In a subsequent step, the two sub-images are reconstructed via addition to a conventional image I_conv or via subtraction to a SLIPI image I_2p-SLIPI (section 2.2). Finally, low-pass filtering is applied to remove the residual line structures inherent to the 2p-SLIPI approach. For consistency, this is equally performed for I_conv.
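A minimal sketch of these two reconstructions is given below, assuming the intensity-correction steps of [18] have already been applied to the sub-images; file and variable names are hypothetical, and a Gaussian filter stands in for the unspecified low-pass filter.

```matlab
% Conventional and 2p-SLIPI reconstruction from two phase-shifted sub-images.
% I0, Ipi: intensity-corrected sub-images of equal size (double precision).
load('subimages.mat', 'I0', 'Ipi');      % hypothetical input file

Iconv = I0 + Ipi;                        % conventional image: multiple scattering retained
I2p   = abs(I0 - Ipi);                   % 2p-SLIPI: unmodulated (multiply scattered) light cancels

% Low-pass filtering suppresses the residual line structure of the 2p-SLIPI
% image; for consistency it is applied to the conventional image as well.
sigma = 3;                               % filter width in pixels (empirical choice)
I2p   = imgaussfilt(I2p,   sigma);
Iconv = imgaussfilt(Iconv, sigma);
```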
1p-SLIPI is based on using only the sub-image I_0. Following the procedure outlined in section 2.2, the sub-image is transformed to the frequency domain using a fast Fourier transform (FFT). The upper frequency component ν (figure 2) is shifted to the center of the Fourier domain (MATLAB function circshift). A second-order Butterworth low-pass filter is applied to remove all frequency components except the centered ν-component, which mainly contains light not biased by multiple scattering. After filtering, the image is finally transformed back to the physical space using an inverse FFT.
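This procedure might be sketched as follows; the pixel position of the modulation peak and the filter cutoff are assumptions that would have to be matched to the actual light-sheet modulation and detection magnification.

```matlab
% 1p-SLIPI reconstruction: isolate the modulated (un-scattered) component in
% Fourier space and transform back. Only the sub-image I0 is used.
load('subimages.mat', 'I0');             % hypothetical input file
[Ny, Nx] = size(I0);
F = fftshift(fft2(double(I0)));          % centered 2D Fourier spectrum

% Shift the first-order peak at the modulation frequency nu to the center.
% nuPix (in spectrum pixels, along the vertical/v axis) is assumed known from
% the light-sheet design or located beforehand by peak detection.
nuPix = 85;                              % hypothetical value
F = circshift(F, [-nuPix, 0]);

% Second-order Butterworth low-pass keeps only the centered nu-component.
[u, v] = meshgrid((1:Nx) - floor(Nx/2) - 1, (1:Ny) - floor(Ny/2) - 1);
D  = hypot(u, v);                        % radial distance in frequency space
D0 = nuPix/2;                            % cutoff below the shifted DC remnant (assumption)
G  = 1 ./ (1 + (D/D0).^4);               % Butterworth filter of order n = 2

I1p = abs(ifft2(ifftshift(G .* F)));     % back-transformation; magnitude taken
```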
Gradient threshold
In gas phase TGP thermometry, the desired temperature information is emitted from individual seeding particles; signal observed in seeding-free areas is attributed to multiple scattering or wall reflections and must therefore be excluded from further processing. For this purpose, a gradient-based threshold algorithm is applied to the reconstructed images to distinguish seeded from non-seeded regions. The algorithm uses approximated first- and second-order derivatives to separate particles from their background [33]; more details are presented in the appendix. In the case of 2p-SLIPI, the gradient-based threshold criterion is complemented by a subsequent global threshold on absolute count values (I(x,y) > T0 with empirical T0). This was required for 2p-SLIPI to remove a remaining low-signal corona surrounding individual particles and ensured sufficiently high signal levels for further processing.
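The following MATLAB sketch illustrates the idea, assuming empirical derivative thresholds T1 and T2 and a global count level T0; the exact derivative kernels and threshold values of the algorithm in [33] and the appendix differ from these assumptions.

```matlab
% Illustrative seeding mask (the actual criterion follows [33];
% thresholds T1, T2 and the global level T0 are empirical assumptions).
[gx, gy] = gradient(double(I));          % approximate 1st derivatives
g1   = hypot(gx, gy);                    % gradient magnitude
g2   = abs(del2(double(I)));             % approximate 2nd derivative
mask = (g1 > T1) | (g2 > T2);            % particles show steep gradients
mask = imdilate(mask, strel('disk', 1)); % grow mask over whole particles
% For 2p-SLIPI only: complement with a global threshold on counts.
mask = mask & (I > T0);
I(~mask) = NaN;                          % exclude seeding-free pixels
```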
Thermometry
In the last step, temperature fields are derived from the reconstructed and thresholded images [18]. The reconstructed blue and red channel images are matched on a pixel-by-pixel basis to ensure an accurate calculation of the two-color ratio. The matching is performed in two steps: first, the individual images are dewarped using correction polynomials calculated from calibration target recordings in DaVis (LaVision GmbH); second, MATLAB image warping algorithms are applied. Subsequently, software binning is performed to increase the signal-to-noise ratio (6 × 6 for conventional and 2p-SLIPI, 10 × 10 for 1p-SLIPI). This results in a final spatial resolution of 180 µm for the conventional and 2p-SLIPI approaches and 320 µm for 1p-SLIPI. The spatial resolution of 1p-SLIPI is limited by the required low-pass filtering in Fourier space, which is inherently determined by the modulation frequency ν of the SLIPI light sheet (see figure 2). In this study, the experiment and thus the modulation frequency ν were designed for a 2p-SLIPI realization [22], such that 2ν is close to the resolution limit of the detection system; this limits the spatial resolution of 1p-SLIPI here. In general, a high spatial modulation frequency ν is essential for the spatial resolution of 1p-SLIPI [24]. The two-color ratio is calculated by dividing the blue and red channel images pixel by pixel. A flat-field correction is performed, and finally the ratio is converted to temperature fields by applying the calibration curves.
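A minimal MATLAB sketch of the binning, ratio and calibration steps follows; the bin size, the flat-field array Rflat and the calibration polynomial pCal are assumed placeholders for the actual processing, and NaN pixels from the threshold step are ignored via 'omitnan'.

```matlab
% Illustrative thermometry step (bin size b, flat-field Rflat and the
% calibration polynomial pCal are assumptions).
b = 6;                                   % 6x6 binning (conv./2p-SLIPI)
binmean = @(I) squeeze(mean(mean(reshape(I, b, size(I,1)/b, b, ...
          size(I,2)/b), 1, 'omitnan'), 3, 'omitnan'));
R = binmean(Iblue) ./ binmean(Ired);     % binned two-color ratio
R = R ./ Rflat;                          % flat-field correction
T = polyval(pCal, R);                    % calibration curve: T = f(R)
```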
Results and discussion
Results of different quasi-instantaneous SLIPI-LIL thermometry approaches are presented and discussed in terms of reconstructed images, calibration curves, temperature fields, accuracy, precision and signal-to-noise ratio (SNR).
Image reconstruction
Results from the different image reconstructions shown in figure 5 are evaluated with regard to their apparent multiple scattering impact. For discussion, zoomed regions of the shear layer between jet and co-flow are considered, using only the red camera channel; the blue channel features similar characteristics and is not shown here. The emphasis in this discussion is on the effect of the applied gradient threshold criterion (zoomed view in figure 5, top row: no threshold applied; bottom row: gradient threshold applied).
Without thresholding, the conventional case (figure 5(a)) is strongly affected by multiple scattering, evident from rather high signal levels in areas where apparently no seeding particles are present. 2p-SLIPI (figure 5(b)) is capable of removing most of the multiple scattering, evident from the high contrast observed between particles and background. Compared to 2p-SLIPI, the 1p-SLIPI approach (figure 5(c)) features a less efficient suppression of multiple scattering. This is evident when inspecting zones where the laser sheets hit the jet nozzle, creating bias by wall-bounded luminescence (particles sticking to the wall) and backscattering into the co-flow (figure 5 ①), or individual vortices that are free of any seeding (figure 5 ②).
In particular, vortical structures containing areas without any seeding particles illustrate the need for particle identification, as provided by the applied gradient threshold algorithm. Comparing images with and without thresholding for the conventional case (figure 5(a)) shows that areas not containing any seeding particles exhibit significant signal levels (5-40 counts in the shear layer, above 60 counts in the jet core) with strong local variations. These apparent signals are related to multiple scattering and cause a significant bias; seeding-free areas therefore need to be excluded. In the 2p-SLIPI reconstruction, the seeding-free zones exhibit a low signal of up to five counts (in the shear layer as well as the jet core) at very high spatial homogeneity. 1p-SLIPI features 5-10 remaining counts in the seeding-free zones and up to 15 counts in the jet; here, the local variations are more pronounced than in the 2p approach.
Results presented in the following are further processed using the gradient-thresholded images. However, the non-thresholded conventional case is retained as a 'worst case benchmark'.
Temperature quantification of luminescence signal
To derive temperatures from the recorded luminescence signals using two-color ratios, an in situ calibration procedure is required. The calibration links the computed two-color ratios with temperatures measured here with a thermocouple (type K, 1.5 mm diameter). Calibration curves for quasi-instantaneous SLIPI-LIL thermometry are shown in figure 6 for the different image reconstruction procedures and varying apparent co-flow seeding densities. The 2p-SLIPI approach features the highest temperature sensitivity, followed by the gradient-thresholded conventional, 1p-SLIPI, and finally the non-thresholded conventional approach. The reduced sensitivity of 1p-SLIPI compared to the gradient-thresholded conventional approach originates from the low-pass filtering in the Fourier-space image processing: 1p-SLIPI derived from sub-images originally designed for 2p-SLIPI applications inherently requires strong filtering, which more strongly smooths particle signal intensities together with the surrounding background or multiply scattered signal. 2p-SLIPI appears to be very robust against varying seeding densities in the co-flow, i.e. an increased multiple scattering level (compare the blue curves in figures 6(a) and (b)). This is in contrast to the other approaches. In particular, the non-thresholded conventional approach depends so strongly on the seeding density that calibration and subsequent measurement of unknown temperature fields must be conducted at similar seeding densities; in practice, this is often difficult to achieve.
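Conceptually, the calibration can be built as in the following MATLAB sketch, where thermocouple readings are paired with the mean two-color ratio in a small probe region; all variable names and the cubic fit are assumptions for illustration.

```matlab
% Illustrative calibration fit. Tcal: thermocouple readings; Rb{k},
% Rr{k}: reconstructed blue/red images for the k-th set point; probeROI:
% a small region around the thermocouple tip. All names are assumptions.
Rcal = zeros(size(Tcal));
for k = 1:numel(Tcal)
    Rk      = Rb{k} ./ Rr{k};                 % two-color ratio field
    Rcal(k) = mean(Rk(probeROI), 'omitnan');  % mean ratio near the probe
end
pCal = polyfit(Rcal, Tcal, 3);                % assumed cubic fit, T = f(R)
```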
Temperature fields
Based on identical raw data, figure 7 shows typical two-dimensional, quasi-instantaneous temperature fields for the different reconstruction approaches. For discussion, zoomed regions of the shear layer between jet and co-flow are considered. Temperature fields are calculated using the respective calibration curves shown in figure 6(a). For the operating conditions shown, the jet core temperature was set to 453 K. As worst case benchmark, the temperature field derived from the non-thresholded conventional reconstruction is shown in figure 7(d). Figure 7 also includes schematics describing the impact of the applied gradient thresholding. For thermometry, signals of individual pixels are binned into interrogation areas prior to the two-color ratio calculation (section 4.3), resulting in locally averaged temperatures T. In the thresholded cases (figures 7(a)-(c)), multiply scattered signal in seeding-free regions is excluded from further processing using not-a-number values (NaN, hatched area in the schematic of figure 7(a)); pixels with NaN values are dummies which are no longer considered in any further mathematical operation. In the non-thresholded case (figure 7(d)), this signal is retained (light gray area) and mixed with particle information (dark gray area). This post-processing procedure biases temperatures in addition to the physical effect of mixing temperature information described in section 1. Within the exemplary interrogation area in the shear layer, the averaged temperature is T = 421 K for the thresholded conventional case (figure 7(a)) and T = 396 K for the non-thresholded conventional case (figure 7(d)); here, the influence of multiple scattering appears to cause an underestimation of temperatures. However, this behavior is reversed in the jet core. Both the 2p-SLIPI (figure 7(b)) and 1p-SLIPI (figure 7(c)) approaches show the fewest overestimated jet temperatures (colored gray and white), indicating a higher accuracy than the conventional approach. The number of overestimated temperature samples increases for the thresholded conventional case (figure 7(a)) and reaches its maximum for the non-thresholded conventional case (figure 7(d)). In the latter case, any multiple scattering and background signal in seeding-free areas is erroneously converted into strongly biased temperatures. Accordingly, the number of overestimated or underestimated temperatures rises with increasing influence of multiple scattering in the presented cases. These findings are observed both globally (e.g. region ① in the jet core) and for local flow structures (e.g. region ②). In accordance with section 5.1, local temperature gradients are well preserved when applying the threshold algorithm.
To evaluate the different approaches in terms of temperature accuracy and precision, figure 8 shows probability density functions (PDF) extracted from a triangular area located in the jet core (highlighted in figure 7(a)), where the temperature is homogeneous and unaffected by mixing (height of triangle ≈ 0.7·d_jet). To estimate accuracy, the difference between the jet core set-temperature of 453 K (measured by type K thermocouple) and the average temperature within the extracted triangle is used (peak of the Gaussian-fitted distribution curve). The full width at half maximum (FWHM) of the temperature distribution within the extracted triangle serves as a measure of precision. The 1p- and 2p-SLIPI cases provide the highest accuracies, with systematic errors as low as 2 K and 4 K, respectively. However, their precision of 35 K (1p) and 72 K (2p) is rather poor and suffers from low SNR values, especially in the 2p-SLIPI approach. The conventional case features an intermediate accuracy with deviations of 14 K and a precision of 25 K. The non-thresholded conventional case possesses both a reduced accuracy (bias of 19 K) and precision (45 K). The signal unaffected by multiple scattering (S_un-scattered) and the multiply scattered intensity (S_MS) contribute to the total signal S. Consequently, the apparent SNR can be defined as

SNR = S/N = (S_un-scattered + S_MS)/N.

The noise N originates from the detection system, i.e. the cameras, and is not influenced by the image reconstruction. As both 1p- and 2p-SLIPI reduce multiple scattering, the signal S is reduced while N = const., resulting in lower apparent SNR values for these approaches. SNRs are further evaluated and the results are summarized in table 1. For estimating the noise, zone Ⓐ in figure 7(b) is utilized; the jet signal is extracted from the jet core (zone Ⓑ in figure 7(b)), while the co-flow signal is taken from zone Ⓒ. Averaged values in the respective areas are used. As 2p-SLIPI is based on a subtractive reconstruction (section 2.2), it results in the lowest overall signal intensities and therefore in the lowest SNR values. For the 1p-SLIPI reconstruction only one sub-image is used, providing approximately half the overall particle information compared to 2p-SLIPI; nevertheless, the 1p approach yields SNRs slightly better than those of 2p-SLIPI, as the subtraction in the 2p-SLIPI reconstruction offsets its larger overall signal. In addition, with SLIPI, intensities are reduced by filtering out the lower frequency components attributed to multiple scattering. SNRs in the conventional case are not affected by the SLIPI reconstruction procedure, such that they are overall larger. However, for the non-thresholded conventional approach, SNR values are weak in areas with sparse seeding; in the case considered here, this is observed particularly in the co-flow region.
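A minimal sketch of how such zone-based SNR estimates can be computed in MATLAB, assuming logical masks zoneA, zoneB and zoneC for the marked regions (the names and the use of mean and standard deviation are assumptions for illustration):

```matlab
% Illustrative SNR estimate from the zones marked in figure 7(b):
% noise N from seeding-free zone A, signal from jet core B and co-flow C.
N         = std(I(zoneA), 0, 'omitnan');
SNRjet    = mean(I(zoneB), 'omitnan') / N;
SNRcoflow = mean(I(zoneC), 'omitnan') / N;
```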
Summary and conclusion
In the present study, structured laser illumination planar imaging (SLIPI) is combined with gas phase phosphor thermometry to derive quasi-instantaneous, two-dimensional temperature fields that are less biased by multiple scattering than conventional phosphor thermometry. 1p- and 2p-SLIPI are compared to the conventional approach and to its advancement based on a gradient threshold that removes areas without any seeding particles prior to further image processing. The main findings are summarized as follows: (a) The non-thresholded conventional approach suffers from areas where no or varying amounts of seeding particles are present. This results in a low temperature sensitivity, a significant temperature bias (low accuracy) and low precision. (b) Applying the gradient threshold algorithm removes areas not containing any particles from post-processing. This reduces the impact of multiple scattering and improves accuracy, precision and temperature sensitivity; however, at high seeding densities the bias is not entirely avoided. (c) 1p-SLIPI exhibits high accuracy at intermediate precision. Multiply scattered luminescence is not fully removed and, inherent to the 1p-SLIPI approach, the spatial resolution is decreased. The experimental complexity is lower than for 2p-SLIPI. (d) 2p-SLIPI is recommended for high temperature sensitivity and accuracy, as it removes the impact of multiple scattering most reliably. As a significant shortcoming, 2p-SLIPI has the lowest SNR, accompanied by poor precision. To overcome this limitation, signal intensities, particularly of the blue camera channel, need to be improved, for example by selecting different filter combinations. Another drawback of the 2p-SLIPI approach is its experimental complexity and cost; the optical alignment, however, can be greatly simplified using a calcite crystal approach [23].
In summary, SLIPI is a valuable approach that significantly reduces multiple scattering in gas phase phosphor thermometry. SLIPI is mandatory for complex geometries with dense or spatio-temporally fluctuating seeding, where the bias cannot easily be corrected for by the calibration procedure. For sparse or constant seeding densities, specifically in canonical geometries, the influence of multiple scattering can be reduced by calibrating at seeding conditions comparable to those of the actual measurement, and structured illumination may not be necessary. Future studies using SLIPI-LIL for gas phase thermometry should focus on improving temperature precision and simplifying the experimental complexity. A further extension of SLIPI-LIL with a SLIPI-PIV approach, as shown in [30], also appears very promising for simultaneous velocimetry.
"Physics"
] |
Reactivity of Zinc Halide Complexes Containing Camphor-Derived Guanidine Ligands with Technical rac-Lactide
Three new zinc complexes with monoamine-guanidine hybrid ligands have been prepared, characterized by X-ray crystallography and NMR spectroscopy, and tested in the solvent-free ring-opening polymerization of rac-lactide. The ligands TMGca and DMEGca were synthesized from camphoric acid and then reacted with zinc(II) halides to form the zinc complexes. All complexes have a distorted tetrahedral coordination. They were utilized as catalysts in the solvent-free polymerization of technical rac-lactide at 150 °C; colorless polylactide (PLA) was produced, and after 2 h conversions of up to 60% were reached. Furthermore, one zinc chlorido complex was tested with different qualities of lactide (technical and recrystallized) and with/without the addition of benzyl alcohol as a co-initiator. The kinetics were monitored by in situ FT-IR or 1H NMR spectroscopy; all kinetic measurements show first-order behavior with respect to lactide. The influence of the chiral complexes on the stereocontrol of PLA was examined and, with MALDI-ToF measurements, the end-group of the obtained polymer was determined. DFT and NBO calculations give further insight into the coordination properties. All in all, these systems are robust against impurities and water in the lactide monomer and show high catalytic activity in the ROP of lactide.
Jeong and co-workers synthesized in situ zinc diisopropoxide complexes with camphorylimine ligands and produced heterotactically enriched PLA with a narrow dispersity and good control at 25 °C in THF [59,60]. Low dispersities and very fast polymerization in bulk or solution were achieved with carbene zinc complexes by Tolman et al.; the drawback is that only purified lactide was used [62,63]. Superior robustness for lactide polymerization can be achieved with zinc guanidine complexes. For example, zinc hybrid-guanidine/bisguanidine complexes show high activity in bulk with technical rac-lactide at 150 °C [69,70]. Moreover, high activity can be reached with robust zinc complexes containing N,O donor functionalities [71]. With quinoline-guanidine bis(chelate) triflato complexes, high activity with technical lactide and good molar masses can be reached; however, the comparable zinc chlorido complexes did not show any activity [72-74].
Herein, we report the synthesis and full characterization of three new chiral monoamine-guanidine hybrid zinc complexes, using camphoric acid as starting material. These complexes were tested in the solvent-free polymerization of rac-lactide at 140 °C or 150 °C with or without a co-initiator. All complexes show high activity in the ROP of lactide and produce polymer in a very short reaction time. DFT and NBO calculations were performed to obtain insight into the coordination properties. These systems provide an excellent, sustainable alternative to commercially used tin complexes.
The ligand precursor was obtained from camphoric acid via a Curtius rearrangement [75]. The addition of one equivalent of Vilsmeier salt in the presence of triethylamine leads to the monoamine-guanidine hybrid ligands L1 (TMGca) and L2 (DMEGca) [76,77] (Figure 1). The ligands were used in complex syntheses with anhydrous zinc chloride/bromide in dry THF to obtain crystals suitable for X-ray crystallography (Figure 2). Three zinc halide complexes, C1 [Zn(TMGca)Cl2], C2 [Zn(DMEGca)Cl2], and C3 [Zn(DMEGca)Br2], were obtained. In all complex systems, two molecules are in the asymmetric unit; since both conformers show similar bond lengths and angles, only one was used for closer examination. The bond lengths and angles of these complexes are summarized in Table 1. The zinc atom is fourfold coordinated by one primary amine, one guanidine moiety, and two halide atoms. Due to the different coordination strengths, the Zn-Ngua bond lengths (1.986(3), 2.013(4), 2.065(5) Å) are shorter than the Zn-Namine bond lengths (2.053(3), 2.065(4), 2.041(5) Å); this trend is also seen in recent studies [69,78]. The N-Zn-N angle is around 100°, and the angle between the coordination planes {∡(ZnX2, ZnN2)} is 80.3°-83.4°, in good agreement with the value of 90° expected for an ideal tetrahedral geometry. All complexes show a distorted tetrahedral coordination, as quantified by the structural parameter τ4, for which a value of 0 indicates a square-planar coordination and a value close to 1 a tetrahedral environment [79]; in all complexes the τ4 value is between 0.87 and 0.89. The delocalization of the guanidine double bond is best described by the structural parameter ρ [80], where a value of 1 indicates complete delocalization; it is determined from the ratio of the Cgua-Ngua bond length to the average of the two Cgua-Namine bond lengths. For all complexes the value is 0.96/0.97, which shows moderate delocalization. The intraguanidine twist is defined by the angles between the planes Ngua-Namine-Namine and Cgua-Camine-Camine. In C1 the averaged angle lies around 35.6°, whereas in C2 and C3 the values are 13.1° and 7.1°, respectively. In C2 and C3 the free rotation in the guanidine moiety is hindered, which leads to smaller angles [69-71]. All complexes were fully characterized by NMR and IR spectroscopy and MS measurements. They are stable in air and do not hydrolyze.
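For reference, the two structural parameters used above are commonly computed as follows; this is the standard formulation from [79] and [80], with α and β the two largest angles around the zinc center, a = d(Cgua-Ngua), and b and c the two Cgua-Namine bond lengths:

$$\tau_4 = \frac{360^\circ - (\alpha + \beta)}{141^\circ}, \qquad \rho = \frac{2a}{b + c}$$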
Density Functional Theory Calculations
The electronic structures of the complexes can be modeled by DFT calculations to obtain more precise insight into the donor properties of the ligands. A benchmark with similar complexes was performed in earlier reports [69]. All studies were performed by DFT with the TPSSh functional [81], the def2-TZVP basis set [82], and empirical dispersion correction with Becke-Johnson damping (GD3BJ) [83-85] in the solvent acetonitrile (SMD model). The results are shown in Table 2. The bond lengths are predicted to be longer than the experimental values, although the relative trend is well reproduced. The bond angles and the structural parameters τ4 and ρ are in good agreement with the values obtained from the crystal structures. As before, the DFT calculations show that the guanidine nitrogen atoms lead to shorter Zn-Ngua bond lengths, reflecting their stronger donor strength in comparison to the Namine atom.
To gain better insight into the donor-acceptor interactions of the complexes, NBO (natural bond orbital) calculations were performed. The optimized structures were used to calculate the charge transfer energies (by second-order perturbation theory) and the NBO charges [86-88]. Normally, these two sets of computed values allow an estimation of the relative donor strength [78]. The NBO charges reflect the influence of electronic effects well but do not represent absolute charges. Table 3 lists the NBO charges on Zn, Namine, and Ngua. The calculated charges on the zinc atoms lie in the range of 1.49-1.56, and the donating nitrogen atoms of the primary amines carry strongly negative charges (−0.95), whereas the charges on the nitrogen atoms of the guanidine moiety are between −0.76 and −0.79. The Ngua atom thus appears less basic than the Namine atoms. This is in contrast to previous studies, where the Ngua donor is more basic and the stronger donor when compared to amine donors [69]. In all complexes, the Zn-N bonds were identified by NBO as covalent bonds; hence, no donor-acceptor interaction energies could be obtained. Together with the short Zn-Ngua bonds, the picture seems mixed: guanidines are strong donors and highly basic, but here NBO predicts the amine to be more basic. Hence, in this case the DFT analysis does not help to settle the question of the donor strength, and further studies on amine-guanidine complexes are needed.
Polymerization Experiments
Initially, complexes C1-C3 were tested for their activity in the ring-opening polymerization of technical, unsublimed rac-lactide under bulk conditions without any co-initiator at 150 °C (polymerization method a; Scheme 1, Table 4). The use of technical lactide and the high temperature is intended to reflect industrially relevant conditions [9]. The measurement starts when the lactide monomer has melted. With one chlorido complex, two more kinetic measurements at 140 °C under stirring (400 rpm) were carried out (polymerization methods b and c; Table 4). Benzyl alcohol was added in one polymerization to determine whether it can open the ring of the lactide monomer and to see its impact on the molar masses (polymerization method c; Table 4). Recent reports [70] showed that the quality of the lactide (technical or recrystallized) has no major impact on the polymerization activity; accordingly, different lactide qualities are used in polymerization methods b and c (Table 4). More details on the individual measurements are given below (Table 4). After each measurement the conversion was determined by 1H NMR or FT-IR spectroscopy, and the molar mass and dispersity Đ were measured by GPC (gel permeation chromatography). The apparent rate constant kapp equals the slope of the linear fit of the semilogarithmic plot of the monomer concentration against time (see the sketch below). The technical lactide was stored at −33 °C in a glove box to guarantee the same conditions for every measurement. Every measurement was performed at least twice.
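A minimal MATLAB sketch of this fit, assuming measured times t (in seconds) and fractional conversions C as inputs:

```matlab
% Illustrative first-order fit: kapp is the slope of ln(1/(1 - C))
% versus time, as used for the semilogarithmic kinetic plots.
y    = log(1 ./ (1 - C));
p    = polyfit(t, y, 1);
kapp = p(1);                 % apparent rate constant in s^-1
```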
Polymerization method a was conducted in an oven at 150 °C. First, 180-200 mg of finely crushed monomer:catalyst mixture (500:1) were weighed into a 2-mL reaction vessel and sealed under nitrogen; the reaction time was up to 6 h. Figure 3 shows the first-order controlled polymerization catalyzed by the zinc halide complexes C1-C3. All complexes show high activity, with kapp = (7.3-12.8) × 10−5 s−1. Complex C3, with bromide as the halide, showed the fastest ROP of lactide (kapp = 12.8 × 10−5 s−1) (Figure 3 and Table 5). After 2 h, a conversion of more than 50% can be reached. It has to be noted that the conversions for C1 and C3 appear very similar after 2 h, which is related to the non-zero intercept of the kinetics for C1; this intercept may be due to initiation by additional water molecules from the technical lactide. The bridging unit in the guanidine group has no influence on the polymerization activity. All in all, all three catalysts polymerize with kapp in a similar range and show similar reaction times. The obtained colorless polymers have chain lengths between 5000 and 20,000 g/mol, shorter than the calculated molar masses. The use of technical lactide leads to more short chains because residual water and other impurities cause chain transfer reactions; intra- and intermolecular transesterification events also lead to shorter chains than expected [4].
To check the impact of the chirality of the complexes on the tacticity of the polymer, homonuclear decoupled 1H NMR spectra were measured for the resulting polymers [31,33]. The probability of heterotactic enchainment Pr is between 0.54 and 0.58 for all complexes, which indicates an atactic polymer with a very slight heterotactic bias (Table 5). One reason why the chiral complexes have no influence on the stereocontrol of the polymerization is that the stereo information is not transferred to the polymer, owing to a lack of steric encumbrance at the high reaction temperatures.
Moreover, further investigations were performed with C2 in an oil bath (polymerization method b) and in a jacketed vessel (polymerization method c). C2 was chosen because all catalysts show similar reaction constants and a higher yield can be obtained with C2 (which is important for industrial use). The monomer:catalyst ratio was 1000:1 (polymerization method b) and 1000:1:10 with BzOH (polymerization method c), in order to determine the impact of benzyl alcohol addition on the polymerization activity. The reaction temperature was 140 °C at a stirring speed of 400 rpm, and in polymerization method c an IR spectrum was recorded every 5 min. The final reaction time was 13 h in method b and 7 h in method c. Technical lactide was used in polymerization method b and recrystallized lactide in polymerization method c; a recently published study showed that the quality of recrystallized vs. technical lactide makes no difference for our systems [70]. The conversion was determined via 1H NMR spectroscopy in polymerization method b and by FT-IR spectroscopy in polymerization method c. Here, the region between 1260 cm−1 and 1160 cm−1 contains the C-O-C stretch of the lactide monomer and the asymmetric C-O-C vibrations of the polylactide chain (Figure 4) [89]; after integration over these two bands, the conversion can be determined (see the sketch below). In the recent study we could also show that the two different setups (oil bath with conversion determined by 1H NMR spectroscopy, method b, or jacketed vessel with conversion determined by FT-IR spectroscopy, method c) make no difference to the polymerization activity and reaction constant [70].
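A minimal MATLAB sketch of this integration step for a single spectrum; the split of the 1160-1260 cm−1 window into monomer and polymer bands and the normalization are assumptions for illustration, not the exact OPUS evaluation.

```matlab
% Illustrative conversion estimate from one in-situ FT-IR spectrum.
% wn: wavenumber axis (ascending, cm^-1); A: absorbance.
mon  = wn >= 1220 & wn <= 1260;       % lactide C-O-C stretch (assumed limits)
pol  = wn >= 1160 & wn <= 1220;       % PLA asymmetric C-O-C (assumed limits)
Amon = trapz(wn(mon), A(mon));
Apol = trapz(wn(pol), A(pol));
C    = Apol / (Apol + Amon);          % fractional conversion (assumed norm.)
```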
The polymerization with benzyl alcohol is more than three times faster than without (Table 6, Figure 5). After 7 h, a conversion (determined by 1H NMR spectroscopy) of up to 65% is obtained in polymerization method c, whereas in polymerization method b only 66% is reached after 13 h. The molar mass determined after 7 h (polymerization method c, Mn = 8000 g/mol) is in good accordance with the theoretical one (9400 g/mol), and the dispersity of 1.15 shows excellent control for polymerization method c. The kinetic plot for polymerization method b does not pass through the origin; the reason could be additional initiating species (e.g., water and impurities in the lactide monomer), which lead to faster polymerization in the beginning. The molar mass of 56,000 g/mol in polymerization method b is too low (expected Mn = 91,000 g/mol); one reason could be additional water residues in the technical lactide monomer, and presumably the recrystallized lactide also retains some water residues. The Đ value of 1.57 shows good reaction control.
To obtain insight into the mechanism of the polymerization, end-group analyses were performed by MALDI-ToF measurements. The MALDI-ToF measurements for the polymerization with the co-initiator benzyl alcohol reveal benzyl alcohol at the end of the chain, and show that a low degree of transesterification occurs in polymerization method c. Ethoxy and OH− are the end groups of the polymerization without benzyl alcohol: the ethoxy end group stems from the precipitation of the polymer in ethanol, and the water residue is due to the technical lactide monomer (see Figures S1 and S2). The exact mechanism of these complex systems is not yet known. The complexes do not act as single-site catalysts, since they cannot open the lactide monomer themselves; as initiating system, a co-initiator, which can be benzyl alcohol or water, starts the polymerization catalyzed by the zinc guanidine complexes. In comparison to other zinc N,N-guanidine complexes, these systems show by far the highest polymerization activity [65,69,70,72]. Ngua,O zinc chlorido complexes show the same reaction constants but yield polymers with molar masses in good accordance with the theoretical ones, which demonstrates excellent reaction control [71].
Materials and Methods
All steps were performed under nitrogen (99.996%), dried with P4O10 granulate, using Schlenk techniques. Solvents were purified according to literature procedures and also kept under nitrogen [90]. All chemicals were purchased from Sigma-Aldrich GmbH (Taufkirchen, Germany), TCI GmbH (Eschborn, Germany) and ABCR GmbH (Karlsruhe, Germany) and were used as received without further purification. D,L-lactide (Total Corbion) was stored at −33 °C under an inert atmosphere in a glove box. The precursors of the ligands TMGca and DMEGca were synthesized from (1R,3S)-(+)-camphoric acid [75]. Ligand L1 was synthesized according to the literature [75]. N,N′-dimethylethylenechloroformamidinium chloride (DMEG-VS) and N,N,N′,N′-tetramethylchloroformamidinium chloride (TMG-VS) were synthesized as described in the literature [76,77].
Physical Methods
For L2, the mass spectrum was obtained with a ThermoFisher Scientific Finnigan MAT 95 mass spectrometer (Waltham, MA, USA) for HR-EI, and for C1-C3 with a ThermoFisher Scientific LTQ-Orbitrap XL spectrometer for HR-ESI. The tube lens voltage lay between 100 and 130 V.
FT-IR spectra were measured with a Thermo Scientific Nicolet Avatar 380 spectrometer as KBr pellets (C1-C3) or as film between NaCl plates (L2).
Gel permeation chromatography (GPC): The average molar masses and the mass distributions of the polylactide samples from polymerizations a and b were determined by GPC in THF as the mobile phase at a flow rate of 1 mL/min at 35 °C. The GPCmax VE-2001 from Viscotek (Waghäusel-Kirrlach, Germany) combines an HPLC pump, two Malvern Viscotek T columns (porous styrene-divinylbenzene copolymer) with a maximum pore size of 500-5000 Å, a refractive index detector (VE-3580), and a viscometer (Viscotek 270 Dual Detector). Universal calibration was applied to evaluate the chromatographic results. GPC analysis of polymerization c was carried out on an Agilent 1260 GPC/SEC MDS (Santa Clara, CA, USA) equipped with two PL Mixed-D 300 × 7.5 mm columns and a guard column, all heated to 35 °C. THF was used as eluent at a flow rate of 1 mL/min. A refractive index (RI) detector maintained at 35 °C was used and referenced to 11 narrow polystyrene standards.
X-ray Analyses
The single crystal diffraction data for C1-C3 are presented in Table 7. The data were collected on a Bruker D8 goniometer with an APEX CCD detector. An Incoatec microsource with Mo-Kα radiation (λ = 0.71073 Å) was used, and temperature control was achieved with an Oxford Cryostream 700. Crystals were mounted with grease on glass fibers and data were collected at 100 K in ω-scan mode. Data were integrated with SAINT [91] and corrected for absorption by multi-scan methods with SADABS [92]. The structure was solved by direct and conventional Fourier methods, and all non-hydrogen atoms were refined anisotropically with full-matrix least-squares based on F2 (XPREP [93], SHELXS [94] and ShelXle [95]). Hydrogen atoms were derived from difference Fourier maps and placed at idealized positions, riding on their parent C atoms, with isotropic displacement parameters Uiso(H) = 1.2Ueq(C) and 1.5Ueq(Cmethyl). All methyl groups were allowed to rotate but not to tip. Full crystallographic data (excluding structure factors) have been deposited with the Cambridge Crystallographic Data Centre as supplementary publications no. CCDC-1579451 (C1), CCDC-1579452 (C2) and CCDC-1579453 (C3). Copies of the data can be obtained free of charge on application to CCDC, 12 Union Road, Cambridge CB2 1EZ, UK (fax: (+44)-1223-336-033; e-mail: deposit@ccdc.cam.ac.uk).
Important crystallographic information on C1-C3 is shown in Table 7.
Computational Details
Density functional theory (DFT) calculations were performed with the program suite Gaussian 09, rev. E.01 [96]. The starting geometries for all complexes were generated from the molecular structures obtained by X-ray crystallography. The Gaussian 09 calculations were performed with the nonlocal hybrid meta-GGA functional TPSSh [83-85] and the Ahlrichs-type basis set def2-TZVP [82]. As solvent model, we used the SMD model (SMD, acetonitrile) [97] as implemented in Gaussian 09. As empirical dispersion correction, we used D3 dispersion with Becke-Johnson damping, as implemented in Gaussian 09, rev. E.01 [84,85]. For TPSSh, the values of the original paper were substituted by corrected values kindly provided by S. Grimme as a private communication [83]. NBO calculations were carried out with the program suite NBO 6.0 [86-88]. Some of these calculations were performed within the MoSGrid environment [98-100].
Polymerization
All polymerizations were reproduced at least twice. For polymerization method a, the reaction vessels were prepared in a glove box and the technical lactide was stored at −33 °C in a glove box. The lactide and the catalyst (500:1) were weighed separately and homogenized in an agate mortar. The reaction mixture was distributed into 10 reaction vessels (180-200 mg each), tightly sealed and heated at 150 °C outside the glove box. The same applies to polymerization method b, with a ratio of 1000:1; here, 0.5 g of the mixture was added to each of seven Young flasks. The tubes, containing a stirring bar (stirring speed 400 rpm), were heated in an oil bath to 140 °C; the polymerization starts at the melting point. After stopping the reaction under cool water, the conversion was determined by dissolving the sample in 1-2 mL DCM and recording a 1H NMR spectrum. The PLA was precipitated in ethanol (150 mL) and dried at 50 °C.
Polymerization kinetics followed by FT-IR spectroscopy (polymerization method c) were measured with a Bruker Matrix-MF FT-IR spectrometer equipped with a diamond ATR probe (IN350 T) suitable for mid-IR in situ reaction monitoring. The kinetics were carried out in a jacketed vessel under argon counterflow with mechanical stirring (400 rpm); the vessel was connected to a Huber Petite Fleur-NR circulation thermostat to keep the temperature constant (adjusted temperature: 150 °C; reaction temperature: 140 °C). After heating the recrystallized rac-lactide (0.14 mol) to 140 °C, a background spectrum was recorded before the IR probe was placed in the lactide melt. Subsequently, the catalyst (1.39 × 10−4 mol) and the benzyl alcohol (1.39 × 10−3 mol) were added under argon to the lactide melt. A spectrum was recorded every five minutes to follow the decrease of the C-O-C lactide monomer band against the increase of the asymmetric C-O-C vibrations of the polylactide in the IR spectrum [89]. The evaluation software was OPUS (Bruker Optik GmbH, 2014).
General Synthesis of Bisguanidine Ligands with Chloroformamidinium Chlorides
The corresponding primary diamine (4.26 g, 30 mmol) and triethylamine (3.03 g, 30 mmol) were dissolved in acetonitrile (60 mL). After dropwise addition of a solution of the Vilsmeier salt (30 mmol) in acetonitrile (60 mL) at 0 °C, the reaction mixture was stirred at reflux overnight. The reaction was cooled down and NaOH (1.20 g, 30 mmol) was added. The solvent and NEt3 were evaporated under vacuum. To complete the deprotonation of the guanidine unit, KOH (20 mL, 50 wt %) was added and the guanidine ligand was extracted with acetonitrile (3 × 50 mL). The combined organic layers were dried with Na2SO4 and activated carbon, and the solvent was evaporated under reduced pressure [76,77]. Ligand L1 was synthesized according to the literature [75]. Ligand L2 is shown in Scheme 2.
General Synthesis of Zinc Halide Complexes with Guanidine Ligands
Zinc halide (1 mmol) was dissolved in 2 mL dry THF, and a solution of the ligand (1.2 mmol in 2 mL THF) was added to the metal salt solution. Crystals could be obtained by slow diffusion of diethyl ether. Complex C1 is shown in Scheme 3, C2 in Scheme 4, and C3 in Scheme 5.
Figure 2. Molecular structures in the solid state of C1-C3. H atoms are omitted for clarity.
Table 4. Polymerization conditions for polymerization methods a-c.
Figure 4. 2D time-resolved FT-IR spectra of polymerization method c of rac-lactide with C2.
Table 3. Natural charges on zinc, Ngua, Namine, and halide atoms for complexes C1-C3.
Table 5. Polymerization data from polymerization method a for complexes C1-C3 after two hours. [a] Determined from the slope of the plots of ln(1/(1 − C)) versus time; [b] determined by integration of the methine region of the 1H NMR spectrum; [c] determined by gel permeation chromatography (GPC) in THF using a viscosimetry detector; [d] probability of racemic enchainment calculated by analysis of the homonuclear decoupled 1H NMR spectra [31,33].
Table 6. Reaction constants for varying polymerization conditions with C2. [a] Determined from the slope of the plots of ln(1/(1 − C)) versus time; [b] determined by 1H NMR spectroscopy; [c] determined by gel permeation chromatography (GPC) in THF using a viscosimetry detector.
"Chemistry"
] |
Insights into secular trends of respiratory tuberculosis: The 20th century Maltese experience
Over half a century ago, McKeown and colleagues proposed that economics was a major contributor to the decline of infectious diseases, including respiratory tuberculosis, during the 19th and 20th centuries. Since then, there has been no consensus among researchers as to the factors responsible for the mortality decline. Using the case study of the islands of Malta and Gozo, we examine the relationship of economics, in particular the cost of living (Fisher index), to the secular trends of tuberculosis mortality. Notwithstanding the criticism that has been directed at McKeown, we present results indicating that improvement in economic conditions is the most parsimonious explanation for the decline of tuberculosis mortality. We reaffirm that individuals of reproductive age were most at risk of dying of tuberculosis, seeing that 70 to 90% of all tuberculosis deaths occurred between the ages of 15 and 45. There was a clear sex differential in deaths: prior to 1930, rates in females were generally higher than in males, and during times of extreme hardship the differential was exacerbated. Over the course of World War I, the sex gap in tuberculosis rates increased until peaking in 1918, the year of the influenza pandemic. The heightened differential was most likely a result of gendered roles rather than biological differences, since female tuberculosis rates again surpassed male rates in 1945, during World War II. Respiratory tuberculosis in both urban and rural settlements (in Malta proper) was significantly influenced by the Fisher index, which explains approximately 61% of the variation in TB death rates (R = 0.78; p < 0.0001). In Gozo, there was no significant relationship (R = 0.23; p = 0.25), most likely a consequence of the island's isolation, self-sufficient economy and limited exposure to tuberculosis.
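For readers comparing the two statistics: in a simple linear regression, the proportion of variance explained equals the square of the correlation coefficient, so the 61% figure follows directly from the reported R:

$$R^2 = (0.78)^2 \approx 0.61$$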
Introduction
As a re-emerging infectious disease of considerable importance, respiratory tuberculosis is one of the top ten causes of death worldwide [1] and until recently was the single main cause of death worldwide in young adults [2][3][4][5]. The history of tuberculosis in humans, contrary to popular belief, predates the Neolithic period and domestication [6,7]. By at least 50 A.D., tuberculosis decimated populations in Europe [8], killing, for the most part, those in their reproductive prime. Beginning around eighteen hundred, however, there was a gradual and progressive decline in respiratory tuberculosis death rates [9][10][11]. As the secular trend in decreasing rates preceded knowledge of the etiology of tuberculosis and use of effective antituberculosis drugs, reasons for the decline are the subject of considerable debate and remain elusive. Leading explanations include a number of factors. Better living conditions resulted in an improved diet and, possibly, a favourable trend in the relationship between the bacilli and the human host [10,12]. Government initiatives aimed at public health measures were able to reduce "urban congestion" [13]. The average number of people "effectively contacted" by an infectious case decreased for the following reasons: the decline in household crowding; improved ventilation in buildings; reduction in the proportion of the elderly, who are sources of tuberculous infection, residing at home; and the segregation of the infected to workhouses or sanatoria [11]; and possibly, in part, the action of natural selection [14,15].
The global decline in tuberculosis deaths was interrupted twice, during World War I and, to a lesser extent, during World War II, and then resumed in post-war years [16]. Undoubtedly, wartime conditions played a huge role in promoting tuberculosis infections by hastening the conversion of latent tuberculosis to active tuberculosis cases [9]. During World War I, rates of respiratory tuberculosis reached unprecedented levels, when even nonbelligerent countries experienced strikingly high tuberculosis death rates. In some of the countries at war, such as those of the British Empire, more people died of tuberculosis in the single year of 1917 (over 1 million) than soldiers died on the battlefield during the course of the four-plus years of war [17]. Nevertheless, war conditions alone could not have accounted for the sheer magnitude of tuberculosis death rates. According to one theory, a novel strain of M. tuberculosis, the Beijing lineage, resurfaced during World War I, a strain with a quicker transition from infection to disease, increased virulence, and greater transmissibility [18], and would have been responsible for the increased mortality. Following the war, there was a sudden return to the declining trend of pre-war tuberculosis rates, suggesting the strain alone could not have been the main contributor to the elevated rates: the bacilli would need to have disappeared abruptly from the population after the end of the war. Additional theories about contributing factors to the increase of rates during the First World War are as follows: overcrowding, which would have been more noticeable in war countries where women, children, and refugees were displaced from war-torn areas; an increase in malnutrition, although there is evidence that allied countries did not experience food shortages to the same extent as German-occupied nations; a shortage of medical care [9,16,19]; and the selective effect of influenza on the tuberculous during the 1918 pandemic [20]. The influenza-tuberculosis relationship can account only for the increase in 1918/19 and not in the preceding war years. Together, the events of World War I and pandemic influenza constitute a singular event that triggered population shock and, in turn, its effect on tuberculosis mortality was unprecedented. In England and Wales, for example, the tuberculosis death rate in 1913 was 135 per 100 000, rose to 162 in 1917, and then fell to 113 per 100 000 in 1920 [17]. We contend that such rare and fortuitous events can be a useful tool to gain insight into the relative importance of specific factors that normally lie hidden in diseases of complex etiology.
In this paper, we explore reasons for the decline of tuberculosis death rates in the civilian population of Malta during the early 20 th century. We also address the question of why there were deviations from the normative trend in 1917 and 1918 when rates exceeded background mortality. We examine overall and regional trends in tuberculosis by sex, age, and sex-age specific rates as well as changes over time and during the crisis period of World War I.
The study population: Background on Malta
The Malta archipelago consists of three major islands: Malta, Gozo, and Comino. Because Comino is sparsely populated, our study does not include this island. Historically, on the larger island of Malta, there existed significant regional differences in terms of landscape: an urban and rural dichotomy. With an area of approximately sixty-seven square kilometres and lying five kilometres away from the main island, Gozo is about one quarter the size of Malta. Malta represents an ideal location to study secular trends in tuberculosis as it minimizes a number of confounding factors responsible for elevated rates of air-borne infections or obscuring real trends in tuberculosis mortality. First, prior to World War II, Malta was a geographically isolated population with little immigration. Consequently, the effects of immigration from high-prevalence countries, which could have altered the rates of tuberculosis over time, can be viewed as minimal [21]. Second, there was little large-scale industry [22,23] to expose major segments of the population to airborne pollutants. Industrial emissions could have compromised the overall health of the population and exacerbated infections such as respiratory tuberculosis. Finally, the potential impact of exposure to M. bovis prior to the pasteurization of milk in 1950, and its subsequent impact on tuberculosis and reactivation [24], can be considered negligible because Malta was completely reliant on goat's milk rather than cow's milk prior to World War II (see [25]).
Aside from the features of cultural and religious homogeneity, there is an important element of diversity in occupation and economics that allows for a better understanding of changes in tuberculosis mortality. The island of Malta's topography, with its marginal Xaghri (karstic land) and overall scarcity of productive land, meant that the Maltese were largely dependent on food imports for their nutritional needs [26]. There were, however, marked differences within the Maltese landscape in food self-sufficiency. Inhabitants of Gozo and rural Malta farmed their land, thus these locales were less reliant on imported food products. By examining tuberculosis in the three regional settings (Gozo, Malta suburban/urban, and Malta rural), we may gain a deeper understanding of nutrition and its role in tuberculosis deaths. In other words, the Maltese setting can be viewed as a continuum of populations residing in areas ranging from high to low dependency on food imports.
Gozo's distinct cultural ethos and biological heritage are grounded in geographical isolation. Gozitans see themselves 'apart' from the Maltese for several reasons: they feel they have been marginalized by bureaucratic neglect, overlooked in regard to fundamental economic opportunities, and denied adequate health and social infrastructure [27]. Because the island was so small and its people showed a long-standing preference for spatial endogamy, it comes as no surprise that, prior to World War II, there were scarcely any differences to be found between the life and customs in the town and in the villages [28].
Accordingly, we shall take advantage of conditions equivalent to a "natural experiment", where the impact of a moment of crisis associated with World War I and the 1918 influenza pandemic, combined with an unusual population setting, can be studied for its effect on respiratory tuberculosis mortality. These conditions include the following: a high-impact stressor that targeted a significant portion of the population and very likely had a negative and discernible impact on health because of its known biological and environmental etiology; a stressor that occurred over a clearly defined period of time and one of sufficient temporal scope to yield measurable results. The second factor, the population, was broadly biologically and culturally homogeneous and yet was diverse enough to allow evaluation of the importance of potential risk factors on vulnerable segments of the community.
Methods and materials
To assess the overall health status of males and females in the Maltese islands, we used life table analysis. We employed life expectancy estimates at birth and at age twenty-five for the years 1911 to 1924. Life expectancy estimates at the age of twenty-five provide a proxy measure of the health of reproductively aged individuals, the age category most at risk of pulmonary tuberculosis, and they provide a more accurate indicator of adult health. Traditionally, the results from life tables have been used to assess the health of large populations. Recent studies, however, have applied the life table methodology to smaller populations (see [29,30]).
Using the Smith Survival Program (Version 9.2) [31], we applied the Chiang period approach [32] for estimating various life table parameters. The benefits of employing the Chiang method are these: (1) it produces the most conservative estimates for comparison between local areas; (2) it is easy to calculate (including statistical variance); (3) it allows sensitivity analyses to be performed on the major assumptions; and (4) it is widely used in research, allowing for comparability of results with other populations [29].
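To make the life-table machinery concrete, the sketch below implements a bare-bones period life table in the spirit of the Chiang approach (conversion of central death rates to probabilities of dying with the usual a_x = 0.5 assumption). The age bands and rates are invented for illustration, and the variance estimation that distinguishes Chiang's method is omitted.

```python
# Minimal period life table (Chiang-style q_x conversion, a_x = 0.5).
# The age bands and death rates below are hypothetical, for illustration only.

def life_table(widths, rates, radix=100_000.0):
    """Return survivors l_x and life expectancies e_x for each age band."""
    k = len(rates)
    # Central death rate m_x -> probability of dying within the band, q_x.
    qx = [widths[i] * rates[i] / (1 + 0.5 * widths[i] * rates[i]) for i in range(k)]
    qx[-1] = 1.0                                  # open-ended final band
    lx = [radix]
    for i in range(k - 1):
        lx.append(lx[i] * (1 - qx[i]))
    dx = [lx[i] * qx[i] for i in range(k)]
    Lx = [widths[i] * (lx[i] - 0.5 * dx[i]) for i in range(k - 1)]
    Lx.append(lx[-1] / rates[-1])                 # person-years in open band
    ex = [sum(Lx[i:]) / lx[i] for i in range(k)]
    return lx, ex

# Bands 0-1, 1-25, 25-45, 45-65 and 65+ (the last width is unused).
widths = [1, 24, 20, 20, 1]
rates = [0.20, 0.005, 0.010, 0.025, 0.10]         # deaths per person-year
lx, ex = life_table(widths, rates)
print(f"e0 = {ex[0]:.1f} years, e25 = {ex[2]:.1f} years")
```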
To capture the nature of the change in tuberculosis mortality by age and sex, we compiled annual death counts. Annual death numbers by sex and age for all causes, tuberculosis and influenza were drawn from the Health Reports for the Maltese islands and from the Maltese Gazette for monthly causes of death by sex and age (1939-1952). Published under the auspices of the Medical Officer of Health, the reports also yielded information on health-related matters concerning infants, maternity, sanitation, housing, food quality and water, as well as detailed accounts and observations of the morbidity of notifiable diseases over the course of a year.
To reconstruct the population at risk for the population and its respective regions, we took advantage of published census reports from 1911, 1921, 1931, and 1948. The annual sex- and age-specific population at risk, taken at four-year intervals beginning with one-year-olds, was interpolated by applying age-specific multipliers to the age- and sex-specific population sizes from the census of the closest year. The total population, we estimated, fell within a margin of less than 0.5% of the total population as cited in the annual Health Reports. Yearly birth rates, retrieved and compiled from the monthly Malta Gazette death records where live birth information was recorded, gave the approximate number of infants under one year of age. This approach assumed stability in the proportions of individuals at risk for each age band over the decennial period. Using the aforementioned data sources, we compiled the overall as well as the sex- and age-specific (15 to 44 years) annual tuberculosis mortality rates for Malta from 1911 to 1949.
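The intercensal interpolation can be sketched as follows. The paper applies age-specific multipliers to the nearest census; the snippet below shows one plausible reading of that procedure as age-specific linear interpolation between bracketing censuses. The function name and all counts are invented for illustration.

```python
# Hypothetical sketch: estimate the population at risk for an intercensal
# year by age-specific linear interpolation between two bracketing censuses.

def interpolate_at_risk(year, census_a, census_b, year_a, year_b):
    """Age-specific linear multipliers between two bracketing censuses."""
    w = (year - year_a) / (year_b - year_a)
    return {age: round((1 - w) * census_a[age] + w * census_b[age])
            for age in census_a}

census_1911 = {"15-24": 38_000, "25-34": 30_000, "35-44": 24_000}  # invented
census_1921 = {"15-24": 41_000, "25-34": 32_000, "35-44": 25_500}  # invented
print(interpolate_at_risk(1917, census_1911, census_1921, 1911, 1921))
```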
We recognize potential limitations in historical studies of tuberculosis. Since tuberculosis was often incorrectly identified as the cause of death [33], it is inherently possible that we overestimated our rates of tuberculosis mortality. At the same time, there was no sure method of confirming cases and causes of death: even by World War I, when portable X-rays were in use, they were not readily available, so that many were diagnosed without radiographic confirmation [9]. For this latter reason, some have argued that the improvement in diagnosis over time has been partly responsible for the decline in tuberculosis deaths [33].
Using census returns and their protocol for defining the residential settlement type, we divided Malta's settlements into three distinct settings: (1) urban and suburban setting, (2) rural setting, and (3) Gozo. We used Malta's administrative definition of the respective regions as specified in the Census returns of 1911 and 1921, and the Annual Health Reports.
Because of the varying degrees of dependency on the importation of foodstuffs across the Maltese islands, we used food imports as a proxy measure of the cost of living, which can be examined for its effectiveness as an explanatory variable for the rates of respiratory tuberculosis mortality over time and within the three regions. Data on the economic parameters used to construct the price and quantity index, the chain-linked Fisher index, for 1910 to 1938 (the years 1916 and 1920 were excluded from the study) were sourced from the importation-of-goods information found in the Malta Blue Books. J. Falzon generously supplied the values for the Fisher index as presented in Falzon and Lanzon, 2011 [34]. We chose it for its superiority as a price and quantity index because it satisfies most of the desirable properties of an index (see the 10 properties as outlined in [34] and as originally presented by Diewert, 1987 [35]). The Fisher index is a measure of inflation based on household consumption and unit price values, and is the square root of the Laspeyres index multiplied by the Paasche index. We analyzed the relationship of economics to tuberculosis death rates (overall and regional) using least-squares regression and Pearson's correlation analysis. Data on tuberculosis deaths, by sex and age, and the population at risk are included in the supporting information (see S1 Dataset).
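A chain-linked Fisher index of the kind described can be computed as below: each yearly link is the geometric mean of the Laspeyres and Paasche indices, and the links are multiplied onto a base of 100. The price and quantity series are invented stand-ins for the Malta Blue Books data.

```python
# Chain-linked Fisher price index from price/quantity data (illustrative).
from math import sqrt

def fisher_link(p0, p1, q0, q1):
    """One-period Fisher link: geometric mean of Laspeyres and Paasche."""
    laspeyres = sum(a * b for a, b in zip(p1, q0)) / sum(a * b for a, b in zip(p0, q0))
    paasche = sum(a * b for a, b in zip(p1, q1)) / sum(a * b for a, b in zip(p0, q1))
    return sqrt(laspeyres * paasche)

# prices[t][i] and quantities[t][i] for goods i in years t = 0, 1, 2.
prices = [[1.0, 2.0], [1.2, 2.1], [1.5, 2.6]]
quantities = [[10, 5], [9, 5], [8, 4]]

index = [100.0]                     # chain the yearly links onto a base of 100
for t in range(len(prices) - 1):
    link = fisher_link(prices[t], prices[t + 1], quantities[t], quantities[t + 1])
    index.append(index[-1] * link)
print(index)
```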
Background health
Fig 1 shows that during the thirteen-year study period life expectancy at birth did not exceed 44 years; moreover, life expectancy at birth was highly variable from one year to another, as life at birth in Malta was tenuous. Diarrheal deaths accounted for high infant mortality rates, and harsh climatic conditions such as an unseasonably dry, hot summer would have exacerbated infant deaths [36].
Probability of dying from respiratory tuberculosis
Figs 4 and 5 show the distinctive pattern of the rise in tuberculosis mortality for both 1917 and 1918 relative to the periods before and after the dramatic rise. For males, the increase was concentrated in the age bands 25 to 34 years and 35 to 44 years. It is also noteworthy that the probability of dying from respiratory tuberculosis after 1918 was very similar to the pattern before 1917. The female pattern of tuberculosis mortality in 1917 and 1918 differed from that of males in terms of a broader base of heightened mortality, as well as a larger peak in mortality. Following 1918, respiratory tuberculosis mortality retained its distinctive peak at the 25 to 34 age band and remained higher than in the period preceding 1917.
Respiratory tuberculosis rates over time
As is shown in Fig 6, the general trend for tuberculosis death rates from the early to mid-1900s in the Maltese islands was a gradual decline. During World Wars I and II, the secular trend was interrupted, and rates rose to exceptional levels, 1.36 deaths per 1000 individuals in 1918, and to a lesser magnitude of 0.84 deaths per 1000 individuals in 1942. Similar findings of exceptionally high mortality rates during the world wars have been observed elsewhere (see [9,17,37]). The markedly high tuberculosis rates during 1917 and 1918 are attributed by most scholars to the confluence of the war and the influenza epidemic. In comparison to England, overall tuberculosis death rates were lower but similar in pattern, with the exception of World War II, when rates in Malta greatly surpassed those in England (see Fig 6).
The mortality rates in reproductively aged adults provide a more refined exploration of the chronological trend in tuberculosis mortality rates since, in any given year, 70 to 90% of all deaths due to tuberculosis occurred between the ages of 15 and 45 years. Fig 7 shows that the general trend of a reduction in tuberculosis rates remains the same, but the absolute rates during the war were more pronounced: 2.32 deaths per 1000 individuals in 1918 and 1.36 per 1000 individuals in 1942. There was a decline in rates beginning in 1920, reaching a nadir in 1922 and rebounding to almost pre-war levels in 1923. Fig 8 shows the tuberculosis death rates for males and females; rates in females were generally higher than male rates. We observed a trend in reproductively aged females similar to the overall tuberculosis death rates: there was a spike of tuberculosis during the war that peaked in 1918, followed by a rapid decline thereafter, with a rebound close to pre-war levels.
On the other hand, the trend in male tuberculosis rates during the war and post-war period differed from that of the females. Male tuberculosis rates dipped only slightly in 1919, when the third wave of the influenza epidemic was occurring, and then returned to near pre-war levels in 1920. It follows that the rates in reproductively aged females were driving the overall rates regardless of age. During World War I, the sex difference in tuberculosis death rates peaked (see Fig 9). The heightened differential is most likely a result of gendered roles as opposed to biological differences, since female tuberculosis rates again surpassed male rates from 1924 to 1928, and in 1943 during World War II. In the urban/suburban and rural settlements of Malta, rates follow similar trajectories inasmuch as these settlements showed a sharp rise in rates beginning in 1917, peaking in 1918 and returning to pre-1917 rates after the 1920s. Unlike the other two settlement types, Gozo's tuberculosis death rates did not peak during World War I, and Gozo had markedly lower tuberculosis rates relative to Malta until the mid-1920s.
Respiratory tuberculosis and economics
The regression and correlation results shown in Table 1 indicate that respiratory tuberculosis in both urban and rural settlements (in Malta proper) was significantly influenced by the price and inflation of imported food (p<0.0001). For both urban and rural settlements, just over 60% of the variation in tuberculosis death rates can be explained by the cost of living. In Gozo, there was no significant impact on respiratory tuberculosis, even as the cost of living rose during the years 1917 to 1919. An examination of overall tuberculosis rates for reproductively aged individuals before and during the war shows a very strong relationship between high tuberculosis death rates and the poor economy. Obviously, the war was a driving force behind lower economic levels, and economics explains the general downward trend in tuberculosis rates over the study period.
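As a quick numerical check of the reported diagnostics: with a single predictor, the share of variance explained equals the squared Pearson correlation, so R = 0.78 implies R² ≈ 0.61, matching the "just over 60%" figure. The sketch below shows the least-squares fit and correlation computation on invented data, not the study's series.

```python
# Least-squares regression and Pearson correlation (invented data).
import numpy as np

fisher_index = np.array([100, 112, 130, 155, 150, 125, 110])
tb_rate = np.array([0.90, 0.95, 1.10, 1.36, 1.25, 1.05, 0.92])

r = np.corrcoef(fisher_index, tb_rate)[0, 1]              # Pearson correlation
slope, intercept = np.polyfit(fisher_index, tb_rate, 1)   # least-squares line
print(f"r = {r:.2f}, r^2 = {r**2:.2f}, "
      f"fit: rate = {slope:.4f}*index + {intercept:.2f}")
```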
Discussion
This paper examined the complexity within the secular trend in respiratory tuberculosis in the Maltese islands, noting both the slow progressive decline since the early 20th century as well as the sharp increase observed in 1917 and 1918. Undoubtedly, there is considerable complexity at the root of the "causes" of the observed pattern. Nevertheless, we argue that the most parsimonious explanation for the decline and the temporary increase can be attributed to changes in the economy and its trickle-down effects. While this explanation was originally proposed by McKeown and colleagues [10] nearly half a century ago, the Malta case study provides evidence of the relationship between a nation state's economy and tuberculosis rates. Malnutrition, resulting from the increase in the cost of living, is a direct proximal cause of tuberculosis susceptibility and death [12,38]. Most likely, the lack of meat consumption deprived individuals of dietary vitamin B3 (niacin) and tryptophan and was responsible for the increase in tuberculosis infections [39].
Further confirmation of the role of the standard of living and its impact on health can be seen in the results of our regional analysis. Our results fall in line with those from Spain, where there is evidence that urban centers with a large concentration of the working poor and abysmal living conditions provided ideal environments for epidemics of respiratory infections and, in particular, respiratory tuberculosis. The "urban penalty" suggests that urban locales such as towns and cities "concentrate poor people and expose them to unhealthy physical and social environments" [40] (pg. 1). Similarly, rates of tuberculosis deaths were lower in rural areas than in urban ones [41]. It has been postulated that rurality protected individuals from mortality during the influenza pandemic in New Zealand because social distancing (lower person-to-person contact) and remoteness lowered transmission of infection from urban to rural areas [42]. Our findings on Gozo's low tuberculosis mortality, even in the face of disruption related to World War I, point to the importance of lower exposure owing to a number of conditions: isolation and a healthy outdoor agrarian lifestyle; economic self-sufficiency (in addition to being farmers and fishermen, the other chief occupations for men were furniture craftsmen and masons); and relatively low reliance on imported food compared to Malta [23].
As stated earlier, a number of reasons can account for the increase of tuberculosis rates during World War I, but our primary explanation lies in an economic upheaval. As a British colony situated in the heart of the Mediterranean, the largest island of Malta was a strategic stronghold for the British military. During World War I, Malta, a supply station for the British military, was heavily involved in caring for the sick and injured and became known as "the Nurse of the Mediterranean." Not surprisingly, then, the economy of Malta during the course of the war was dependent on providing services for the Royal Navy [43]. As the war progressed, there was a downturn in war-related activities, which culminated in a large number of unemployed men and women. While the war effort initially brought economic prosperity to the Maltese, this prosperity was short lived. By 1916, the cost of living had doubled; unemployment rates, employment instability, and food shortages rose [44,45]. In addition, the cost of food items rose; they were inferior in quality and difficult to purchase, even at inflated prices [46]. For example, from 1914 to 1918, the prices of high-protein food items (fish, meat, cheese, eggs) increased by 200% to 500% [47]. An important food staple was bread, as it was the major source of energy, especially for the poor. During World War I, the price of wheat increased and the availability of flour decreased significantly, to the point where the price of bread trebled by the end of the war. The shortage of bread was so acute that it contributed to social unrest and a strike at the Malta Dockyard in May of 1917 [44,45].
Other evidence of the dire state of the economy was the lack of available and new housing, a shortage that certainly would have contributed to overcrowding as large family size in Malta continued unabated. We argue that household security was compromised, not only because of growing rates of unemployment and unstable employment, but also because construction of new housing dwindled during the war years, exacerbating the existing state of overcrowding. Stagnation of housing was evident from 1915, when the number of houses built declined from 152 in Malta and 25 in Gozo, to 17 and 5, respectively, by 1920-21. This was precisely the period when respiratory tuberculosis rates rose. Only in 1922 was there an improvement in living conditions, when a total of 5311 houses in Malta and 370 in Gozo were built from 1922 to 1933 [48]. Not only did overcrowding, unemployment and high prices of food and other necessities fuel a compromise to general health, but the absence of public welfare for the needy further affected the well-being of the Maltese working poor [43].
Influenza and tuberculosis
Our emphasis on the war and economy stands in opposition to the work of Noymer [49], who singles out the influenza pandemic as a defining moment in the history of tuberculosis and in precipitating its decline, especially in males (see also Noymer and Garenne [50]). Furthermore, Noymer [20] postulates that there was a selection effect, specifically passive selection, which resulted in increased tuberculosis mortality during the 1918 pandemic because of the "age-mortality overlap" with influenza (p. 1601). Because of the exceptionally high rates of tuberculosis during World War I, there was a rapid two-year decline of tuberculosis death rates after 1918 in the USA [20]. Undoubtedly, the influenza pandemic played a contributing role in the increase in tuberculosis deaths in our study period, but it cannot be the primary factor simply because the rise began a year before the start of the pandemic in September 1918. Collectively, the augmented tuberculosis rates in 1917 and 1918/1919 in Malta resulted in a rapid decline of tuberculosis rates post-1919, and rates returned almost to pre-war levels overall and for females. The decrease in death rates following a stressor period is known as the "harvesting effect" or "short-term mortality displacement." This phenomenon occurs when there is a heightened, albeit temporary, mortality rate among those with health complications because of underlying health problems (especially cardio-respiratory diseases) and among the elderly [51,52], or because of increased vulnerability associated with lower socio-economic status. Following the spike in deaths, there is a drop in the death rate, the aftermath of the harvesting of the frail segment of a population [53]. The war, together with influenza cases, accelerated deaths in the tuberculous, who might otherwise have had many more years to live, and, very probably, hastened the transition from latent tuberculosis to full-blown tuberculosis, since almost all young adults were exposed to the bacillus during this time [9].
The lack of the harvesting effect in males is most likely a result of lower levels of tuberculosis rates. In addition, we must remember that identifying specific events associated with harvesting is fraught with many complications: there is no established method for assessing the harvesting effect. First, the stressor or event cannot be constant in the population, as in the occurrence of extremely hot weather, drought, or pollution [54]. Second, there is no specific time frame defining when the "dip" or healthy period begins and how long it should last [55]. Obviously, the scale of the stressor will determine the length of the healthy period, be it days, weeks, months or years. Third, it is incumbent on the researcher to clarify the extent of the stressor or event and the expected limits of the healthy period. Fourth, we emphasize one additional requisite condition that determines whether the harvesting effect is operating: there must be an eventual return to background levels; that is, rates should return to pre-stressor levels. Otherwise, harvesting will persist indefinitely or fail to exist. One example of the misuse of harvesting in this context is the study by Oei and Nishiura (2012) [56], who stated that harvesting of tuberculosis occurred following the influenza epidemic in Japan and the Netherlands. However, because tuberculosis had been declining over time, the return to "normalcy" did not reach the same level as pre-war rates. Lastly, it is paramount to recognize that harvesting is population specific: it is contingent on the population at risk, the age distribution, and the causes of death [57].
Tuberculosis and gender differences
Hudelson [58] suggests that there are a number of factors that have potential implications for gender differentials in tuberculosis morbidity and mortality. Of these, two factors merit consideration for our study. Both center around gendered vulnerabilities: (1) differential exposure to the tuberculosis bacilli; and (2) the general health/nutritional status of TB-infected persons.
Our research supports the earlier suggestion that gendered vulnerabilities lie at the root of the heightened female respiratory mortality rates that began in 1917. We posit that, as the primary caregivers for the sick within the traditional patriarchal large extended family unit, Maltese women were uniquely placed to be exposed to the bacilli during periods of deprivation and instability. Owing to their gendered role as caregivers and homemakers and to the burden of domestic duties, women underwent markedly higher stress levels as they tried to maintain the daily essential elements of household security [59]. Crowded into unsanitary and poorly ventilated living quarters, large families provided ideal conditions for the spread of infectious diseases through continuous contact and close physical and social proximity to one another. Furthermore, in their selflessness, the women placed family and husband first, especially when there were food shortages and/or a lack of quality foodstuffs.
Our results on sex differences in tuberculosis rates agree with the findings of Cobbett [37], who, nearly a century ago, reported that females, rather than males, were most affected during the war. From the vantage point of working shortly after World War I, Cobbett concluded that it was not an increase in new cases, but rather those who were already affected who succumbed to tuberculosis. The rise in mortality came about, he suggested, because nutrition was seriously impaired by the war. In Malta during World War I, the sex differential in adult (aged 15-44 years) tuberculosis rates peaked; the heightened differential was a result of the increased stressors placed on women to maintain a household when resources were scarce. Obviously, increased tuberculosis rates during World War I placed a burden on the reproductive fitness of women. During World War II, women bore the brunt of many pressures, and the importance of gendered roles was again thrown into prominence. We cannot ignore that periodically male tuberculosis mortality rates surpassed those of females. Explanations for the change in the sex differential in tuberculosis, which resulted from the transient increase in male rates and the continuation of the decline in female rates, will be explored in future studies.
Conclusion
Today, tuberculosis, along with HIV, is one of the leading causes of death [60]. The drain of tuberculosis morbidity and mortality on public health is apparent in developing and developed nations alike. In developing nations, tuberculosis accounts for about 26% of avoidable deaths [61] and, in all parts of the world, it is a re-emerging opportunistic disease in those with HIV and other vulnerable groups. Understanding trends in tuberculosis mortality in past contexts, during periods of stability and moments of heightened stress, offers insight into disease management when future epidemics occur.
We have reaffirmed that economics, the cost of living in particular, was a major factor in determining tuberculosis mortality rates in the 20th century. Furthermore, harvesting of deaths among the tuberculous was observed during times of economic strain. The importance of the heterogeneity of regional rates of tuberculosis, owing to variation in economic dependency within a nation state, was demonstrated: Gozo's experience with tuberculosis was muted primarily because of isolation and a self-sufficient economy. | 7,407.4 | 2017-08-17T00:00:00.000 | [
"Medicine",
"Economics"
] |
Crystal structures of {μ2-N,N′-bis[(pyridin-3-yl)methyl]ethanediamide}tetrakis(dimethylcarbamodithioato)dizinc(II) dimethylformamide disolvate and {μ2-N,N′-bis[(pyridin-3-yl)methyl]ethanediamide}tetrakis(di-n-propylcarbamodithioato)dizinc(II)
The ZnII atom in each of the title compounds is coordinated by two dithiocarbamate ligands and a pyridyl-N atom. The resultant NS4 donor set approximates a square pyramid and a trigonal bipyramid for the solvated and unsolvated structures, respectively. In the solvate, amide-N—H⋯O(dimethylformamide) hydrogen bonds define a three-molecule aggregate, while in the unsolvated structure, amide⋯amide hydrogen bonding leads to a supramolecular chain.
The title structures, [Zn2(C3H6NS2)4(C14H14N4O2)]·2C3H7NO (I) and [Zn2(C7H14NS2)4(C14H14N4O2)] (II), each feature a bidentate, bridging bipyridyl-type ligand encompassing a di-amide group. In (I), the binuclear compound is disposed about a centre of inversion, leading to an open conformation, while in (II), the complete molecule is generated by the application of a twofold axis of symmetry so that the bridging ligand has a U-shape. In each of (I) and (II), the dithiocarbamate ligands are chelating with varying degrees of symmetry, so the zinc atom is within an NS4 donor set approximating a square pyramid for (I) and a trigonal bipyramid for (II). The solvent dimethylformamide (DMF) molecules in (I) connect to the bridging ligand via amide-N—H⋯O(DMF) and various amide-, DMF-C—H⋯O(amide, DMF) interactions. The resultant three-molecule aggregates assemble into a three-dimensional architecture via C—H⋯π(pyridyl, chelate ring) interactions. In (II), undulating tapes sustained by amide-N—H⋯O(amide) hydrogen bonding lead to linear supramolecular chains with alternating molecules lying to either side of the tape; no further directional interactions are noted in the crystal.
Chemical context
The potential of self-association between amide functionalities via amide-N—H⋯O(amide) hydrogen bonding has long been recognized (MacDonald & Whitesides, 1994). In this way, eight-membered {⋯HNC=O}2 synthons can be formed. Alternatively, extended aggregation patterns arise: single-point-of-contact repeat associations lead to supramolecular chains, while double connections (edge-shared) lead to tapes. In this connection, isomeric di-amide structures of the general formula (n-NC5H4)CH2N(H)C(=O)C(=O)N(H)CH2(C5H4N-n), for n = 2, 3 and 4, hereafter abbreviated as nLH2, have long attracted interest for their potential to form supramolecular tapes. For example, this is realized in the two-dimensional structure formed in the 1:1 co-crystal of 4LH2 and the co-former, bi-functional 1,4-di-iodobuta-1,3-diyne (Goroff et al., 2005). Here, the amide tapes are orthogonal to the N⋯I halogen bonding. In the realm of metal-containing species, a three-dimensional architecture can be assembled in the crystal of {[Ag(3LH2)2]BF4}n by a combination of Ag—N bonds for the tetrahedral silver(I) atom, provided by bidentate bridging ligands, where the latter are also connected via concatenated {⋯HNC2O}2 synthons (Schauer et al., 1997).
Structural commentary
The molecular structure of the centrosymmetric, binuclear zinc(II) compound in (I) is shown in Fig. 1a and selected geometric parameters are collected in Table 1. The zinc centre is coordinated by two chelating dithiocarbamate ligands and the coordination geometry is completed by a pyridyl-N atom. The dithiocarbamate ligands coordinate differently, with the S1-ligand coordinating almost symmetrically with Δ(Zn—S) = (Zn—S_long − Zn—S_short) = 0.10 Å. By contrast, the S3-ligand coordinates slightly more asymmetrically with Δ(Zn—S) = 0.18 Å. These differences are not reflected in the associated C—S bond lengths, which span an experimentally equivalent range of 1.720 (2) to 1.732 (2) Å. The resulting NS4 donor set defines a distorted square-pyramidal geometry as judged by the value of τ = 0.18, which compares to τ = 0.0 for an ideal square pyramid and 1.0 for an ideal trigonal-bipyramidal geometry (Addison et al., 1984). In this description, the zinc atom lies 0.5011 (3) Å above the plane defined by the four sulfur atoms [r.m.s. deviation = 0.0976 Å, with the range of deviations being −0.0990 (3) Å for the S3 atom to 0.0987 (3) Å for S2]. The widest angles are defined by the sulfur atoms forming the shorter of the Zn—S bonds of each dithiocarbamate ligand and by those forming the longer Zn—S bonds. The dihedral angle between the best plane through the four sulfur atoms and that through the pyridyl ring is 87.13 (4)°, indicating a near perpendicular relationship. The dihedral angle between the two chelate rings is 27.46 (6)°.
The molecular structure of the binuclear zinc(II) compound, (II), is shown in Fig. 1b and again selected geometric parameters are collected in Table 1. The first and most obvious distinction between the binuclear compounds in (I) and (II) relates to the symmetry within the molecules, i.e. the bridging ligand is disposed about a centre of inversion in (I), leading to an extended conformation, but is disposed about a twofold axis in (II), leading to a curved conformation. While to a first approximation the coordination geometry in (II) matches that in (I), some differences are apparent. Each dithiocarbamate ligand coordinates asymmetrically with Δ(Zn—S) = 0.26 and 0.22 Å, respectively, and these differences are reflected in the associated C—S bond lengths, with those associated with the weakly coordinating sulfur atoms being significantly shorter than those associated with the more tightly bound sulfur atoms, Table 1. There is also a significant difference in the coordination geometry defined by the NS4 donor set, with τ = 0.76. This difference arises from a reduction, by approximately 25°, of the angle subtended at the zinc atom by the more tightly bound sulfur atoms, Table 1. The change in coordination geometry is reflected in the relatively wide dihedral angle between the chelate rings of 59.41 (3)°.
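The geometry index used above is the Addison τ5 descriptor, τ = (β − α)/60, where β ≥ α are the two largest L—M—L angles in a five-coordinate complex; τ = 0 corresponds to an ideal square pyramid and τ = 1 to an ideal trigonal bipyramid. A minimal sketch follows; the angles are illustrative, not those of (I) or (II).

```python
# Addison et al. (1984) tau-5 geometry index for five-coordination.

def tau5(angles_deg):
    """tau = (beta - alpha) / 60 from the two largest L-M-L angles."""
    beta, alpha = sorted(angles_deg, reverse=True)[:2]
    return (beta - alpha) / 60.0

# Illustrative set of L-M-L angles (degrees); yields tau = 0.18,
# i.e. a distorted square-pyramidal geometry.
print(tau5([170.0, 159.2, 120.0, 95.0, 88.0]))
```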
The common feature of (I) and (II) is the relatively long central sp²-C—C(sp²) bond, Table 1. This feature for these ligands is well established and is reflected by comparable bond lengths determined by experiment and theory for the two polymorphs known for the uncoordinated ligand, 3LH2 (Jotani, Zukerman-Schpector et al., 2016). Interestingly, in one of the polymorphs, both independent molecules are disposed about a centre of inversion and adopt an anti-periplanar form, as in (I), while in the second polymorph, the molecule is twofold symmetric with a U-shaped conformation, i.e. is syn-periplanar, as in (II). Computational chemistry indicated no significant energy difference between the two conformations, a result consistent with the literature expectation for the majority of conformational polymorphs (Cruz-Cabeza et al., 2015).
Supramolecular features
The presence of solvent DMF molecules in the crystal of (I) precludes supramolecular self-association between the amide functionalities. Instead, three-molecule aggregates are generated via amide-N—H⋯O(DMF) hydrogen bonds, Fig. 2a and Table 2. These aggregates are further linked via DMF-C—H⋯O(amide) and pyridyl-C—H⋯O(DMF) interactions, leading to eight-membered {⋯OC2NH⋯OCH} and seven-membered {⋯O⋯HNC3H} synthons, respectively. Connections between these aggregates are of the type methyl-C—H⋯π, where the π-systems are either the pyridyl ring or one of the chelate rings. Referring to the latter, such C—H⋯π(chelate ring) interactions are more and more being observed in the structural chemistry of metal dithiocarbamates owing, no doubt, to the effective chelating ability of dithiocarbamate ligands, which leads to significant π-electron density within the chelate rings they form (Tiekink & Zukerman-Schpector, 2011; Tiekink, 2017). The net result of the foregoing is a three-dimensional architecture, Fig. 2b. From the view down the b axis, Fig. 2c, there are obvious areas with little or no directional interactions between the residues.
By contrast to the myriad of supramolecular associations identified in the crystal of (I), only conventional amide-N—H⋯O(amide) hydrogen bonding is found in the crystal of (II), Table 3, with no other specific interactions identified based on the distance criteria in PLATON (Spek, 2009). The hydrogen bonding leads to linear supramolecular chains along the c axis, Fig. 3a, with alternate binuclear molecules lying above and below the plane defined by the supramolecular tape.
Table 1. Geometric data (Å, °) for (I) and (II).
Database survey
The investigation of zinc(II) dithiocarbamates, Zn(S2CNRR′)2, with at least one of R/R′ being CH2CH2OH, has led to an interesting array of structures owing to hydrogen bonding. Thus, hydroxy-O—H⋯O(hydroxy) hydrogen bonding links otherwise molecular species into supramolecular chains in the cases of Zn[S2CN(R)CH2CH2OH]2(pyridine)·pyridine for R = Me and Et (Poplaukhin & Tiekink, 2017) and Zn[S2CN(Me)CH2CH2OH]2(3-hydroxypyridine), and supramolecular layers via hydroxy-O—H⋯S(dithiocarbamate) hydrogen bonds in Zn[S2CN(i-Pr)CH2CH2OH]2(2,2′-bipyridine) (Safbri et al., 2016); the propensity for the hydroxy group in dithiocarbamate ligands with R = CH2CH2OH to form O—H⋯S rather than O—H⋯O hydrogen bonds has been summarized recently (Jamaludin et al., 2016). With potentially bridging ligands, mixed results have been observed in recent studies: in terms of potentially tetra-coordinate urotropine (hexamethylenetetramine, hmta), monodentate coordination has been found in each of the four independent molecules comprising the asymmetric unit of Zn[S2CN(i-Pr)CH2CH2OH]2(hmta) (Câmpian et al., 2016). Supramolecular layers are sustained by hydroxy-O—H⋯O(hydroxy) and hydroxy-O—H⋯S(dithiocarbamate) hydrogen bonding, as per above, augmented by hydroxy-O—H⋯N(hmta) hydrogen bonding. Bidentate bridging has been found in 2:1 adducts of {Zn[S2CN(CH2CH2OH)2]2}2 with pyrazine (Jotani et al., 2017) and 4,4′-bipyridyl (Benson et al., 2007), in which three-dimensional architectures are sustained by hydroxy-O—H⋯O(hydroxy) hydrogen bonding. Apart from the interwoven polymers discussed in the Chemical context, the compounds most closely related to the title compounds are thioamide analogues of 3LH2, i.e. 3LSH2. Some interesting crystal chemistry occurs when {Zn[S2CN(Me)CH2CH2OH]2}2(3LSH2) is recrystallized from acetonitrile (Poplaukhin et al., 2012). Upon prolonged standing, a one molar ratio of S8, a decomposition product, is incorporated in the co-crystal, with hydroxy-O—H⋯O(hydroxy) hydrogen bonding leading to a two-dimensional array. When DMF is diffused into an acetonitrile solution of the same compound, one hydroxy group hydrogen bonds to the DMF-O while the other hydroxy group self-associates to form a supramolecular chain. In the present study, when additional hydrogen-bonding functionality is not present, the amide groups are able to self-assemble as shown in Fig. 3. With the foregoing in mind, i.e. variable coordination geometries, flexible conformations of the bridging ligands and different hydrogen-bonding potential, more systematic studies in this area are warranted.
Synthesis and crystallization
Crystals of (I) were grown from liquid diffusion of ether into a 1:1 molar ratio of Zn(S2CNMe2)2 and 3LH2 in DMF; m.p. 479-481 K. Crystals of (II) were grown from the slow evaporation of a 2:1 molar ratio of Zn[S2CN(n-Pr)2]2 and 3LH2 in a MeOH/EtOH solution.
Refinement
Crystal data, data collection and structure refinement details are summarized in Table 4. For each of (I) and (II), carbon-bound H atoms were placed in calculated positions (C—H = 0.95-0.98 Å) and were included in the refinement in the riding-model approximation, with Uiso(H) set to 1.2-1.5Ueq(C). The N-bound H atoms were located in difference-Fourier maps but were refined with a distance restraint of N—H = 0.88±0.01 Å, and with Uiso(H) set to 1.2Ueq(N). Owing to poor agreement, two reflections, i.e. (356) and (014), were omitted from the final cycles of refinement of (I).
Fractional atomic coordinates and isotropic or equivalent isotropic displacement parameters (Å²); columns: x, y, z, Uiso*/Ueq. | 2,772 | 2017-09-19T00:00:00.000 | [
"Chemistry"
] |
EFFECT OF IMPORT TARIFF IMPLEMENTATION POLICY ON REFINED SUGAR PRODUCT COMPETITIVENESS IN INDONESIA
This research set out to determine the effect of welfare distribution with respect to import duty on government revenue, consumer expenditures, producer revenues, and efficiency losses (in production, in consumption, and the net effect), and the level of competitiveness of cane sugar in Indonesia by calculating the Domestic Resource Cost (DRC). The research used secondary data from related preceding research and other references such as magazines, journals, bulletins and the like. The results showed that government revenue, the change in consumer surplus, producer surplus, the net economic loss in production and consumption, and the exchange gain economization are influenced by the import tariff and the price elasticities of supply and demand. It also showed that sugar product competitiveness in Indonesia is higher than that of the same product from other countries, as the value of DRC is less than one. Keywords: Sugar, Welfare Distribution, Domestic Resource Cost (DRC), Import tariff.
INTRODUCTION
The Indonesian government, through the Ministry of Commerce and Industry via letter number 364/MPP/Kep/8/1999, has carried out an import commerce policy. This policy states that public importers are allowed to import sugar.
The main goal of import duty implementation is to reduce the sugar import volume in order to protect domestic producers against cheaper foreign sugar products. The admission charge applied to imports is ad valorem (i.e. the percentage of the import duty is fixed for all imported commodities). The policy of sugar commerce, and the rise of competitiveness and efficiency of sugar production, are noticeably prioritized to reduce the import quota and economize foreign exchange. The implementation of this commerce policy is required to support the Indonesian government's plans to protect all economic agents. It is expected to result positively in every unit of economic agents' welfare, particularly producers' and consumers' welfare, as well as sugar product competitiveness in international commerce. With reference to this background, the problems under focus are as follows: (i) how sugar commerce policy influences the welfare of economic agents such as producers, consumers and the government; and (ii) how competitive the national sugar is in international commerce, in order to reduce the import quota. Based on these, the present research was initiated to investigate the following issues: (a) the effect of welfare distribution with respect to the import duty on government revenue, changes in consumer and producer surpluses and efficiency losses (in terms of production, consumption, and the net effect) and (b) sugar product competitiveness in Indonesia by calculating the Domestic Resource Cost (DRC). The results of this research are expected to be useful to the Indonesian government in formulating policy on sugar production.
RESEARCH METHODOLOGY
This research started with data collection from secondary sources. The researcher employed the library research method, which involves collecting data from related preceding research and other references such as magazines, journals, bulletins and the like. Data were also collected from the statistics bureau, the Indonesian Sugar Statistic and Development Center (P3GI), the Logistics Affair Agency (BULOG) and other related institutions.
P^r_i = output border price (FOB price);
P^r_j = input border price (CIF price).
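For orientation, a standard formulation of the DRC ratio consistent with the border-price definitions above is given below (after Bruno's classic treatment). The exact variant used in this study is not reproduced in the extracted text, so the functional form here is an assumption.

```latex
% Standard DRC formulation (assumed; after Bruno, 1972).
\[
\mathrm{DRC} \;=\; \frac{\sum_{s} f_{s}\,P^{d}_{s}}
                        {P^{r}_{i} \;-\; \sum_{j} a_{j}\,P^{r}_{j}},
\qquad \mathrm{DRC} < 1 \;\Rightarrow\; \text{comparative advantage},
\]
% where $f_{s}$ is the quantity of domestic factor $s$ per unit of output,
% $P^{d}_{s}$ its domestic (shadow) price, $a_{j}$ the quantity of tradable
% input $j$ per unit of output, and $P^{r}_{i}$, $P^{r}_{j}$ the border
% prices defined above.
```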
RESULTS AND DISCUSSION
Refined sugar is one of the primary needs of the Indonesian people. The need for refined sugar keeps rising continually along with Indonesian population and income growth. For detailed information, the data on the sugar industry in Indonesia are presented in Table 1.
Data in Table 1 showed that within 18 years, the total sugar production increased. However, the increase was lower than the entire national sugar demand. The increasing national sugar production did not meet the sugar demand in Indonesia, since sugar supply and imports were not adequate. The trend in sugar cane production appears to synchronize with the area of land allocated to its production. In 1990, sugar cane productivity reached 75.70 tons per hectare on average, and it declined to 74.58 tons per hectare in 2008.
At present, sugar production in Indonesia is heavily centralized in Java. This island is inhabited by almost 67 percent of the total Indonesian population and possesses the largest consumer contribution (almost 75 percent) of the entire domestic sugar production. In 2006/2007, the total sugar cane production in Java recorded 27.9 million tons (74.9%), while production outside Java recorded 9.6 million tons (25.1%). In the 2007/2008 annual planting period, the total sugar cane production declined to 23.8 million tons (72.5%) in Java and to 8.5 million tons (27.5%) outside Java.
The results of applying the import tariff on the welfare of producers, consumers and the government are presented in Table 2. Using the data in Table 1, the impact of import tariff implementation is simulated at 25%, 30%, 40%, 60%, 100% and 120% on the welfare of producers, consumers, and the government. If the government's intervention in terms of import tariff implementation is set at 25 percent, it reduces consumer welfare, with the decrease in consumer surplus estimated at about IDR 2536.932 billion. The loss in consumer surplus is then distributed to the additional producer surplus of about IDR 1905.842 billion (74.52%), generates economic inefficiency in the producer sector of about IDR 37.183 billion (1.46%), and contributes about IDR 590.087 billion to government revenue (23.26%). The import tariff policy is expected to economize the exchange gain to the tune of about IDR 1220.725 billion. This calculation is made with reference to the Nominal Protection Coefficient (NPC) estimate of about 1.55, and price elasticities of national supply and demand of 0.025 and -0.119, respectively. Sugar imports in 2008 are estimated at 1,443,000 tons, with the exchange rate at IDR 10,000/1 US$.
Table 2. Welfare distribution under alternative import tariffs (IDR billions; decimal commas as in the original):
| Item | 25% | 30% | 40% | 60% | 100% | 120% |
| NELp | 37,183 | 53,544 | 95,190 | 214,178 | 594,940 | 856,714 |
| WGp | 1905,842 | 2286,508 | 3047,663 | 4569,753 | 7614,773 | 9137,964 |
| WGc | -2536,932 | -3053,243 | -4094,788 | -6213,574 | -10593,933 | -12855,505 |
| GR | 590,087 | 708,105 | 944,140 | 1416,210 | 2360,349 | 2832,419 |
| FE | 1220,725 | 1226,894 | 1239,233 | 1263,909 | — | — |
Table 2 shows that the higher the import tariff, the higher the efficiency losses. This is clearly indicated by the net effect value, which gradually rose from IDR 41.008 billion with a 25% import tariff to IDR 885.122 billion with 120% (more than a 2000 percent increase).
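The welfare quantities in Table 2 follow from the NPC, the elasticities and the domestic values of production (V′) and consumption (W′) via the formulas listed at the end of this paper. The sketch below implements them; the "− NELp" term in the producer-surplus change and all input values are assumptions for illustration, not the study's data.

```python
# Welfare effects of a tariff from NPC, elasticities and domestic values.
# Implements the NELc/NELp/PWGc/PWGp formulas given later in the text;
# the "- nelp" term in pwgp and all inputs below are illustrative assumptions.

def welfare_effects(npc, e_s, e_p, V, W):
    wedge = (npc - 1.0) / npc
    nelp = 0.5 * e_s * wedge**2 * V        # net economic loss in production
    nelc = 0.5 * abs(e_p) * wedge**2 * W   # net economic loss in consumption
    pwgp = wedge * V - nelp                # change in producer surplus
    pwgc = -(wedge * W + nelc)             # change in consumer surplus
    return {"NELp": nelp, "NELc": nelc, "PWGp": pwgp, "PWGc": pwgc}

print(welfare_effects(npc=1.55, e_s=0.025, e_p=-0.119,
                      V=5_500.0, W=7_200.0))   # IDR billions, invented
```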
The government can reduce this net efficiency loss in the producer sector by diverting some of the government revenue (import tax) to increase production efficiency, particularly with respect to the cost of technology development. This can be applied at the farm level, where most workers are involved. The technology development would involve introducing the best seeds, better planting techniques, and production facilities (fertilizer, tools, and chemical products) in line with local needs.
The implementation of an import tariff at the consumer level, particularly for the underprivileged, can be addressed by giving them subsidies. In this case, the government needs to apply a two-price system (a protection price for producers and a subsidized price for poor consumers). An import duty policy aimed at protecting domestic sugar production is reasonable in the short term. However, in the long term this policy will not be applicable. It inflicts financial loss not only on consumers (who must pay a higher price), but also on the domestic sugar industry, which remains inefficient for being repeatedly protected. Moreover, in the free-trade era this situation cannot be sustained over the long term. The import duty should decline gradually. As a result, domestic sugar producers would be enabled to renew their production systems with the intention of improving efficiency and competitiveness relative to the foreign sugar industry.
If the goal of import tariff implementation is to stabilize the domestic price, it may not be effective, because foreign price changes will be directly transmitted to the domestic price. If scarcity of the domestic sugar commodity occurs, it will result in a high price difference between the domestic and foreign price, and a high import tariff will result in smuggling. Therefore, the import tariff influencing the sugar commodity price in Indonesia should be controlled by the government.
Table 3 presents the welfare distribution under the assumption that the price elasticity of supply is about 0.41 and that of demand is about -0.45.
The DRC value was calculated using the input and output value rates of cane sugar production in East Java, particularly in dry and wet fields, in the 1990/2008 annual planting periods. Cane sugar production in East Java constitutes the basis of the production cost calculation, considering that these fields are the largest in Indonesia. As a result, calculating the DRC value for sugar cane planted in dry and wet fields adequately represents the national calculation.
The results showed that the DRC value of sugar cane planting in wet and dry fields is less than 1 (DRC = 0.860 in wet fields and DRC = 0.700 in dry fields), implying that sugar product competitiveness in Indonesia is higher than that of the same product from other countries. Therefore, the hypothesis that sugar product competitiveness in Indonesia is lower than that of the same product from other countries cannot be accepted. This result is in line with Ratnawati (2006), who found that the coefficient value of the domestic resource cost (DRC), for cane planted either in dry fields or in wet fields, is less than 1 (DRC < 1). This indicates that cane sugar producers planting cane either in wet fields or in dry fields are economically efficient in using domestic resources. In other words, sugar cane producers have an advantage in producing sugar for import substitution. For Indonesia, it is better, in terms of saving foreign exchange, to produce sugar locally than to import it. But in fact, the demand has not been fulfilled; hence, to supplement local sugar, the Indonesian government needs to import sugar.
Table 3 columns: Item; Tariff 25%; Tariff 30%; Tariff 40%; Tariff 60%; Tariff 100%; Tariff 120%.
Based on the DRC values above, they cannot be interpreted directly. It must be noted that many sugar factories in Indonesia (especially in East Java) are aged and no longer efficient. The inefficient old small factories should be replaced, as they reduce national sugar output and competitiveness. Besides, it is necessary to consider relocating sugar factories outside Java, since cane areas in Java are continually limited and sugar production must compete with other commodities (especially rice). In Java, cane planted in wet fields cannot compete with other crops. This occurs because the income from cane is relatively lower, while cane requires a higher cost of production and a longer time. From the DRC calculation above, it appears that the efficiency and competitiveness of sugar production in dry fields are higher than in wet fields.
4. Net Economic Loss in Consumption (NELc): NELc = 0.5 e_p [(NPC-1)^2 / NPC^2] x W'
5. Net Economic Loss in Production (NELp): NELp = 0.5 e_s [(NPC-1)^2 / NPC^2] x V'
6. Change of consumer surplus (PWGc): PWGc = -{[(NPC-1)/NPC] x W' + NELc}
7. Change of producer surplus (PWGp): PWGp = [(NPC-1)/NPC] x V' - NELp

Data for Indonesia showed that sugar cane planting was still concentrated in Java: approximately 285,026 ha, or about 68.15 percent of the entire sugar cane planting area. The wet-field cane area has declined since 1999, from 152,305 ha to 118,188 ha (22.44 percent) by the 2005/2006 planting period. On the contrary, the dry-field area increased from 84,387 ha to 126,303 ha (49.67 percent) in Java and from 1,407 ha to 10,607 ha (653.87 percent) outside Java (P3GI 2008). Furthermore, P3GI reports that cane grown in dry fields has 31.60% lower productivity than cane grown in wet fields. As a consequence, the sugar crystal content (rendement) in dry fields is lower than in wet fields: Ratnawati (2006) reports rendements of 7.59% and 8.12% in dry and wet fields, respectively. Moreover, sugar cane in dry fields requires high farming costs, with different technology and a higher production cost per kilogram, since the fields often lie a good distance from the sugar factory. Nevertheless, in dry fields sugar cane provides considerable benefit with minimal effort. | 3,057.6 | 2010-02-24T00:00:00.000 | [
"Economics",
"Agricultural and Food Sciences"
] |
DIRECT: Direct and Indirect Responses in Conversational Text Corpus
We create a large-scale dialogue corpus that provides pragmatic paraphrases to advance technology for understanding the underlying intentions of users. While neural conversation models acquire the ability to generate fluent responses through training on a dialogue corpus, previous corpora have mainly focused on the literal meanings of utterances. However, in reality, people do not always present their intentions directly. For example, if a person says to the operator of a reservation service 'I don't have enough budget.', they in fact mean 'please find a cheaper option for me.' Our corpus provides a total of 71,498 indirect–direct utterance pairs accompanied by multi-turn dialogue histories extracted from the MultiWoZ dataset. In addition, we propose three tasks to benchmark the ability of models to recognize and generate indirect and direct utterances. We also investigate the performance of state-of-the-art pre-trained models as baselines.
Introduction
We create a large-scale dialogue corpus that discloses users' hidden intentions to advance techniques for natural language understanding in dialogue systems. Neural conversation models have become able to generate high-quality responses (Zhao et al., 2020) and to perform dialogue state tracking (Hosseini-Asl et al., 2020; Lin et al., 2020). These previous studies have been based on the literal meanings of user utterances; little attention has been paid to the intentions implied by them.
However, during conversation, humans often respond to others with indirect expressions, without directly stating their requests or intentions (Searle, 1979; Brown et al., 1987). When humans receive an indirect response, they infer the intention implied in the response from context, such as the dialogue history. For example, in the operator-user dialogue in Figure 1, the user responds 'I don't have enough budget...' to the operator's utterance 'Would you like to make a reservation for this restaurant?' If the operator considered only the literal meaning, they would repeat the question. Based on the dialogue history, however, the operator infers that the user wants a cheaper restaurant and suggests an option that satisfies this preference ('There is a low-priced Italian restaurant'). Our experiments revealed that even a state-of-the-art dialogue system (Yang et al., 2021) degrades in response-generation quality for indirect utterances. Such a pair of a user utterance and its hidden intention belongs to the class of pragmatic paraphrases, which emerge in conversations depending on the context. For dialogue systems to communicate with users at the human level, they should process pragmatic paraphrases to address the true intentions of the user.
In this study, we release 1 a corpus of direct and indirect responses in conversational text, DIRECT, which contains 71,498 pairs of indirect and direct responses. We expand the commonly used dialogue corpus of MultiWoZ (Eric et al., 2020), a multi-domain and multi-turn task-oriented dialogue corpus. The MultiWoZ corpus is created using the Wizard-of-Oz method, in which the user and system speak alternately. For each user's utterance, we use crowdsourcing to collect 'an utterance that is more indirect than the original utterance' and 'an utterance that is more direct than the original utterance'. Hence, DIRECT provides triples of paraphrases: original utterances, indirect utterances, and direct utterances.
We designed three benchmark tasks using this corpus to evaluate the model's ability to recognize and generate pragmatic paraphrases. As baselines, we investigated the performance of state-of-the-art pre-trained models, BERT (Devlin et al., 2019) and BART (Lewis et al., 2020), for benchmark tasks.
Related Work
Paraphrases have been applied in dialogue-system research in the context of data augmentation (Hou et al., 2018). Despite their importance for understanding users' intentions, pragmatic paraphrases have been overlooked; only a few recent studies have focused on them. Pragst and Ultes (2018) proposed a rule-based approach to automatically construct a corpus consisting of pairs of indirect and direct utterances. They demonstrated that a neural conversation model could accurately extract utterances with opposing directness. Because of their rule-based approach, the patterns of indirect/direct utterances in their corpus are limited. Louis et al. (2020) used crowdsourcing to build a corpus comprising indirect answers to Yes/No questions, annotated with whether the answers were Yes or No. This corpus provides natural answers written by crowd workers; however, it is limited to context-free Yes/No questions. In contrast to these studies, DIRECT provides natural utterances written by humans with rich dialogue histories. Furthermore, it covers various types of utterances.
While several paraphrase corpora exist (Dolan and Brockett, 2005; Lan et al., 2017), all have focused on context-free paraphrases; hence, none provide the pragmatic paraphrases that emerge in context. Corpora for natural language inference are also relevant to our study (Giampiccolo et al., 2007; Marelli et al., 2014; Bowman et al., 2015). Like the paraphrase corpora, they do not provide contexts, which means they rely on world knowledge to determine whether a text entails a hypothesis. In contrast, context is a crucial element in determining paraphrasal relationships in pragmatic paraphrases. DIRECT is the first corpus that provides large-scale pragmatic paraphrases. It should also be a valuable resource for research on paraphrase identification and generation, as a step beyond literal paraphrases.
DIRECT Corpus
A pragmatic paraphrase is a pair of texts that have equivalent outcomes in a given context, which frequently emerge in conversations. Expanding a dialogue corpus is a promising approach for building a corpus that collects such pragmatic paraphrases as such a corpus is conversational by nature and often provides conversation histories. Specifically, we employed MultiWoZ2.1 (Budzianowski et al., 2018;Eric et al., 2020) and collected pragmatic paraphrases using crowdsourcing.
We describe how we collected pragmatic paraphrases in Section 3.1 with careful quality control as described in Section 3.2. Section 3.3 describes the statistics of the collected corpus. Section 3.4 presents a comparative analysis between our corpus and existing paraphrase corpora using conventional paraphrase identification models.
Direct and Indirect Response Collection
MultiWoZ is a multi-domain, task-oriented dialogue corpus annotated with dialogue act tags and dialogue states, comprising 10,438 dialogues. Each dialogue involves alternating utterances by a user and the system; the total number of utterances is 71,524.
We used Amazon Mechanical Turk 2 , a crowdsourcing service, to expand MultiWoZ with pragmatic paraphrases. The workers first received instructions, as presented in Table 1, and some examples of the task. Then, the workers were shown dialogue histories extracted from MultiWoZ, as illustrated in Figure 2. 3 Based on the given conversation histories, the workers input indirect and direct responses that have the same intent as the specified user response in the dialogue (written in red in Figure 2) into the input forms at the bottom.
Instructions
Read the following dialogue between the USER and the OPERATOR, then rephrase the USER's response written in red letters into two different types of speech, following the instructions below.
Type-1 (Direct): a more direct response that expresses the same intention as the original response
Type-2 (Indirect): a more indirect but natural response that expresses the same intention as the original response
'Indirect response' means, for example, a response to a Yes/No question that does not contain a 'Yes' or 'No', or a response that does not directly refer to the action you want the other person to take or to your desire. If you have trouble rephrasing, click the 'Hints' button. You can see the goals that the 'USER' must achieve in that interaction.

We assumed that workers should be able to develop indirect and direct responses based only on the dialogue histories. If they did not understand the intent of the utterance they were required to paraphrase, we provided an option to refer to the goal of the user as 'Hints' (upper part of Figure 2). These goals were extracted from MultiWoZ.
We targeted the utterances of the 'user' in MultiWoZ for paraphrasing because users primarily express their needs and preferences. We assumed the average time per post to be 1 minute and set the average reward at 0.12 USD (7.2 USD per hour).
As a result, we collected 71,498 indirect-direct pairs. We divided the corpus into training and test data in the same manner as in the settings of MultiWoZ. Note that our corpus is a parallel corpus, comprising indirect and direct responses, but it can also be used with the original MultiWoZ responses, i.e., triples of indirect, original, and direct responses are also extractable.
Examples Table 2 presents the examples of the collected pragmatic paraphrases. In the upper example, the user asks for a restaurant in a moderate price range. The indirect response is 'I don't want to overspend but remember its also vacation,' which requires an understanding that 'its also vacation' is a paraphrase of 'not too cheap', as explicitly stated in the direct response in this context. In the lower example, the phrase 'Do you know of any in town?' in the indirect response paraphrases 'Can you find me a guesthouse...?' in this context.
Quality Control
We carefully created the DIRECT corpus to collect high-quality pragmatic paraphrases by pre-screening workers. We also conducted a quality assessment.

Figure 2: User interface shown to crowdsourcing workers to generate indirect and direct paraphrases.
Worker Selection Prior to formal data collection, we carefully selected crowd workers to avoid trivial paraphrases that merely replace or shuffle some words. Specifically, we posted a pilot consisting of 2 tasks. We automatically rejected workers whose average word-level Jaccard index between indirect and direct responses exceeded 0.75, and we also manually inspected sampled paraphrases. We then asked the workers who passed these automatic and manual quality checks to carry out the actual tasks. In total, we obtained 536 workers to exclusively complete the tasks.
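A minimal sketch of this screen (our reconstruction, not the authors' released code):

```python
# Word-level Jaccard screen used to reject workers producing trivial rewrites.

def word_jaccard(utt_a: str, utt_b: str) -> float:
    """Word-level Jaccard index between two utterances (case-insensitive)."""
    a, b = set(utt_a.lower().split()), set(utt_b.lower().split())
    if not a and not b:
        return 1.0
    return len(a & b) / len(a | b)

def passes_screen(pairs, threshold=0.75) -> bool:
    """Accept a worker whose average indirect/direct Jaccard is at most the threshold."""
    scores = [word_jaccard(ind, dr) for ind, dr in pairs]
    return sum(scores) / len(scores) <= threshold

pilot = [("I don't have enough budget.", "Please find me a cheaper option."),
         ("Do you know of any in town?", "Can you find me a guesthouse in town?")]
print(passes_screen(pilot))  # True: the two rewrites share few words
```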
Quality Assessment After completing paraphrase collection, we used the same crowd workers to assess the quality of the collected pragmatic paraphrases for 7,372 dialogues from the test set. We showed the workers the utterances for assessment together with their dialogue histories. The paraphrased utterances, presented as Response-A and Response-B with their indirect/direct labels hidden, were also shown. Using a binary label, the workers first judged whether the paraphrased utterances have the same intention as the original utterance. The workers then determined whether Response-A or Response-B was more direct; if they could not decide, they were allowed to choose a 'no difference' label.
We assumed the average time per post to be 30 seconds and set the reward at 0.06 USD (7.2 USD per hour). Five workers were assigned to each paraphrase, and the final label was decided via majority voting. Note that in this assessment task a worker was never assigned paraphrases they had generated themselves, to avoid self-evaluation. The assessment results are listed in Table 3. Intention-accuracy is the percentage of collected responses judged to have the same intention as the original response. Intention-accuracy for indirect and direct paraphrases is 95.0% and 99.7%, respectively, indicating that the collected sentences preserve the intent of the original utterances.
The intention-accuracy of indirect responses was 4.7% lower than that of direct responses. This is expected: indirect utterances represent users' intentions only implicitly, which makes them inherently more ambiguous.
Directness-accuracy is the percentage of direct responses judged as 'direct' by the worker. The accuracy was as high as 81.4%. The DIRECT corpus also provides these assessment labels.
Statistical Analysis
We reveal the characteristics of pragmatic paraphrases in the DIRECT corpus using case-insensitive token-level analyses. Table 4 presents the word-based statistics on our corpus (except the test data). 4 First, the vocabulary size of indirect responses was much larger than that of direct responses. This implies that, even for utterances with the same intent, expressions are more diverse in indirect responses than in direct responses. The average number of words per utterance was 15.59 for indirect utterances and 12.38 for direct utterances; Wilcoxon's test (Wilcoxon, 1945) confirmed that this difference is statistically significant at the 0.1% level.

Figure 3 shows the histogram of length differences between indirect and direct responses, whose distribution spreads into both positive and negative ranges. This implies that simply shortening a sentence does not necessarily make an utterance more direct.

Next, we investigate the number of words that must be replaced to transform an indirect response into a direct one. We computed three metrics: 'Keep', 'Delete', and 'Add'. 'Keep' is the average number of words kept when rewriting an indirect response as a direct one, 'Add' is the number of words that need to be added, and 'Delete' is the number of words that need to be deleted. Table 4 demonstrates that 'Keep' is smaller than 'Add' and 'Delete', indicating that more than half of the words need to be replaced to turn an indirect response into a direct one.
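One plausible formulation of these statistics (our assumption: case-insensitive token multisets; the text does not spell out the exact procedure):

```python
# 'Keep'/'Add'/'Delete' counts for rewriting an indirect response as a direct one.
from collections import Counter

def keep_add_delete(indirect: str, direct: str):
    src = Counter(indirect.lower().split())
    tgt = Counter(direct.lower().split())
    keep = sum((src & tgt).values())    # tokens shared by both utterances
    add = sum((tgt - src).values())     # tokens newly introduced in the direct form
    delete = sum((src - tgt).values())  # tokens dropped from the indirect form
    return keep, add, delete

print(keep_add_delete(
    "I don't want to overspend but remember its also vacation",
    "I want a restaurant in a moderate price range"))
```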
Finally, Table 5 presents the top 20 most frequent trigrams that appear in indirect and direct responses. Indirect and direct responses use distinctive expressions. Frequent trigrams of direct responses contain verbs that directly convey what the user wants an operator to do, such as 'book' and 'find', as well as phrases that refer to specific objects, such as 'the reference number'. On the contrary, trigrams of indirect responses contain phrases of 'is there any' and 'I think that', which do not appear in the counterpart.
Model-based Analysis
In this section, we investigate how the DIRECT corpus differs from existing paraphrase corpora, using state-of-the-art paraphrase identification models. First, we compute the cosine similarity between paraphrase pairs in DIRECT, MRPC (Dolan and Brockett, 2005), and the Twitter URL Paraphrase corpus (Lan et al., 2017) using Sentence-BERT 5 (Reimers and Gurevych, 2019). Figure 4 shows the histograms, which confirm that DIRECT contains more paraphrase pairs with low cosine similarity than MRPC and the Twitter URL Paraphrase corpus. Sentence-BERT is expected to capture the literal meaning of a sentence through its training on the STS Benchmark (Cer et al., 2017); the large volume of paraphrases with low similarity confirms that DIRECT provides paraphrases beyond literal similarity.
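A sketch of this similarity computation with the sentence-transformers library; the checkpoint named in the paper's footnote 5 is not visible here, so 'all-MiniLM-L6-v2' is an illustrative substitute.

```python
# Cosine similarity between paraphrase pairs using a Sentence-BERT encoder.
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("all-MiniLM-L6-v2")  # illustrative checkpoint

pairs = [("I don't have enough budget.", "Please find me a cheaper restaurant."),
         ("Do you know of any in town?", "Can you find me a guesthouse in town?")]

emb_a = model.encode([a for a, _ in pairs], convert_to_tensor=True)
emb_b = model.encode([b for _, b in pairs], convert_to_tensor=True)

# Diagonal of the pairwise cosine matrix = similarity of each aligned pair.
sims = util.cos_sim(emb_a, emb_b).diagonal()
print(sims.tolist())  # pragmatic paraphrases tend to score low literally
```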
Next, we investigate whether a paraphrase identification model trained on existing paraphrase corpora transfers to DIRECT. Specifically, we calculated the percentage of paraphrase sentence pairs recognized as paraphrases by the paraphrase identification model; Figures 5 and 6 present the results.
Benchmark Tasks
We designed three tasks using the DIRECT corpus to benchmark models' ability to handle pragmatic paraphrases. Specifically, we design an indirect-to-direct transfer task (Section 4.1), a direct-to-indirect transfer task (Section 4.2), and a directness prediction task (Section 4.3). We also evaluated state-of-the-art pre-trained models on these tasks as baselines.
Indirect-to-direct Transfer Task
Task Description and Motivation Indirect-to-direct transfer is the task of transforming an indirect response into a direct response while preserving its intent under the context, i.e., the dialogue history. This task evaluates a model's ability to accurately interpret the intent of an indirect response. A possible application is the pre-editing of utterances for task-oriented dialogue systems: by transforming the user's indirect utterances into direct utterances that are easier to interpret before inputting them into the model, the response-generation quality is expected to improve.
Baselines We employed BART (Lewis et al., 2020) as the baseline model for this benchmark.
The architecture of the baseline model is shown in Figure 7(a). We added new special tokens '<user>', '<system>', and '<query>' so that the model can distinguish between utterances in the dialogue history and the response to be transformed. We first added a '<query>' tag to the beginning of the indirect response to be transformed. Then, for the dialogue history, we added '<user>' and '<system>' at the beginning of user and system utterances, respectively. These utterances are concatenated in their order of appearance in the dialogue history and input into the BART encoder. We fine-tuned the model using cross-entropy loss.
For implementation, we used the transformers (Wolf et al., 2020) library. The pre-trained model we used was 'facebook/bart-base'. 7 We used the AdamW (Loshchilov and Hutter, 2019) optimizer for training, with a learning rate of 2e-5. 8 The batch size was 8, owing to the GPU memory size. 9 We randomly sampled 2,000 dialogues from the training data as a validation set and used the rest for training. Through 30 epochs of training, the model with the lowest validation loss was used to evaluate the test data.
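A minimal sketch of the input construction and training step described above, assuming the transformers and PyTorch APIs; the relative ordering of the tagged query and the history turns in the concatenation, and the example target, are our guesses rather than the authors' released code.

```python
# Sketch of the baseline's input format and one training step (our reconstruction).
from transformers import BartTokenizer, BartForConditionalGeneration
from torch.optim import AdamW

tokenizer = BartTokenizer.from_pretrained("facebook/bart-base")
model = BartForConditionalGeneration.from_pretrained("facebook/bart-base")

# New special tokens let the model tell history turns from the query response.
tokenizer.add_special_tokens(
    {"additional_special_tokens": ["<user>", "<system>", "<query>"]})
model.resize_token_embeddings(len(tokenizer))

history = [("system", "Would you like to make a reservation for this restaurant?")]
indirect = "I don't have enough budget."
direct = "Please find me a cheaper restaurant."  # illustrative gold output

# '<query>' marks the response to transform; history turns follow in order.
source = f"<query> {indirect} " + " ".join(
    f"<{spk}> {utt}" for spk, utt in history)

batch = tokenizer(source, return_tensors="pt")
labels = tokenizer(direct, return_tensors="pt").input_ids
optimizer = AdamW(model.parameters(), lr=2e-5)  # learning rate from the paper

loss = model(**batch, labels=labels).loss  # cross-entropy, as described above
loss.backward()
optimizer.step()
```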
We also constructed a model that disregards dialogue history (BART without history) to investigate the effects of context. In addition, we trained a transformer model (Transformer w/ history) from scratch on only the DIRECT corpus to investigate the effects of pre-training. The transformer model has the same architecture as BART, comprising six self-attention layers each in the encoder and decoder.

Results and Discussion

Table 6 presents the BLEU score (Papineni et al., 2002) and perplexity of each model. For the BART-based models, the model using dialogue history has a higher BLEU score, as expected, because pragmatic paraphrases are context-dependent. Comparing the BART and transformer models with dialogue history, the former largely outperforms the latter, confirming that pre-training is also crucial in this task.
Examples of generated direct responses are presented in Table 7. In the upper example, the BART model successfully interprets the phrase 'the best place' in the indirect response as an expression of the price range, while the transformer without pre-training failed in this interpretation. The context is particularly important in the indirect response in the lower example. In fact, the BART w/o history model generated a sentence with the opposite intent to the reference. Conversely, the two models that use dialogue history generate sentences with the same intent as the reference.
Direct-to-Indirect Transfer Task
Task Description and Motivation Direct-to-indirect transfer is the converse task: transforming a direct response into an indirect response while preserving its intent. Miehle et al. (2018) showed that approximately as many users of dialogue systems prefer indirect responses as prefer direct responses. In addition, indirectly expressing requests to others is a polite strategy to save face (Brown et al., 1987). Hence, for dialogue systems to communicate smoothly and politely with humans, a technology to rephrase a direct response into an indirect one is also desirable.
Baselines Similar to the setup in the indirect-to-direct task, we used the BART model that takes a dialogue history as input as a baseline. The model architecture is shown in Figure 7(b). We input the dialogue history and the direct utterance into BART in the same manner as described in Section 4.1.
We also constructed a BART model that disregards dialogue history, as well as a transformer trained on the DIRECT corpus from scratch. The hyperparameters and training settings are the same as those in the indirect-to-direct task.
Results and Analysis Table 8 shows the BLEU scores and perplexity. The fine-tuned BART models achieved higher BLEU scores and lower perplexity than the transformer without pre-training, which again confirms the effectiveness of pre-training. Overall, the performance of all models was lower in this task than in the indirect-to-direct transfer task. Moreover, dialogue history did not improve the BLEU score or perplexity of the BART model. These results imply that direct-to-indirect transfer is more difficult than indirect-to-direct transfer. As our statistical analyses in Section 3.3 showed, indirect responses have a larger vocabulary and are longer on average. Hence, we conjecture that even the fine-tuned BART model does not acquire the ability to properly transform a direct response into an indirect one, regardless of the availability of the dialogue history. As seen from Table 9, the responses generated by all models failed to preserve the intent of the direct response. More sophisticated models are needed to achieve direct-to-indirect transformation.
Directness Prediction Task
Task Description and Motivation This task aims to estimate the degree of directness of an utterance. This technology allows utterances predicted to be indirect to be rephrased into direct ones using an indirect-to-direct transfer model, or allows the system to ask users to clarify their intentions before the utterance is input into a dialogue system.
In DIRECT, each dialogue history is associated with a triple of responses: the original response from MultiWoZ, an indirect response, and a direct response. These can be ordered by decreasing directness as direct, original, and indirect. In this task, a model takes a response as input and predicts its degree of directness.
Baselines We employ BERT as the baseline, taking as input the response whose directness is to be predicted together with its dialogue history. The architecture is shown in Figure 7(c). The output of the final layer corresponding to the '[CLS]' token is fed into a linear layer followed by a sigmoid function; the final output is regarded as a score indicating the directness of the response.
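A minimal sketch of this scoring head, assuming the transformers API; any detail beyond what the text states (pooling, naming) is our guess.

```python
# Directness scorer sketch: BERT '[CLS]' vector -> linear layer -> sigmoid.
import torch
from transformers import BertModel, BertTokenizer

class DirectnessScorer(torch.nn.Module):
    def __init__(self, name: str = "bert-base-cased"):
        super().__init__()
        self.bert = BertModel.from_pretrained(name)
        self.head = torch.nn.Linear(self.bert.config.hidden_size, 1)

    def forward(self, **encoded):
        cls = self.bert(**encoded).last_hidden_state[:, 0]   # '[CLS]' position
        return torch.sigmoid(self.head(cls)).squeeze(-1)     # score in (0, 1)

tokenizer = BertTokenizer.from_pretrained("bert-base-cased")
scorer = DirectnessScorer()
inputs = tokenizer("Please book the restaurant for 7pm.", return_tensors="pt")
print(float(scorer(**inputs)))  # untrained, so the score is arbitrary
```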
As discussed in Section 3.3, there is a remarkable difference in the frequency of words between the direct and indirect responses. We also employ a simple bag-of-words-based linear regression model to investigate the usefulness of word-level features in predicting directness.
We use pointwise and pairwise settings to train the model, as is typical in learning-to-rank (Mitra and Craswell, 2018). The pointwise loss minimizes the mean squared error between the predicted and gold-standard directness scores; as the gold standard, we set 1.0, 0.5, and 0.0 for the direct, original, and indirect responses, respectively. The pairwise loss is designed so that the prediction score of a more direct response is larger than that of a less direct one. Suppose there is a direct response A and an indirect response B, whose predicted scores are s_A and s_B; the pairwise loss is defined as -log(1 / (1 + e^{-(s_A - s_B)})). As evaluation metrics, we compute the percentage of exact matches between the ranking based on the predicted scores and the gold standard. We also compute Kendall's tau between the prediction and the gold standard and report the average.
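The two objectives and the ranking metric above admit a direct translation into code; this is our sketch with toy scores, not the authors' implementation.

```python
# Pointwise (MSE) and pairwise (-log sigmoid margin) losses, plus Kendall's tau.
import torch
from scipy.stats import kendalltau

def pointwise_loss(pred, gold):
    # gold directness: 1.0 (direct), 0.5 (original), 0.0 (indirect)
    return torch.mean((pred - gold) ** 2)

def pairwise_loss(s_direct, s_indirect):
    # -log(1 / (1 + exp(-(s_A - s_B)))): rank the direct response higher
    return -torch.nn.functional.logsigmoid(s_direct - s_indirect).mean()

pred = torch.tensor([0.9, 0.4, 0.2])   # toy predictions for one response triple
gold = torch.tensor([1.0, 0.5, 0.0])
print(pointwise_loss(pred, gold).item())
print(pairwise_loss(pred[:1], pred[2:]).item())

tau, _ = kendalltau(pred.numpy(), gold.numpy())
print(tau)  # 1.0: the predicted ranking matches the gold ordering exactly
```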
We implemented the BERT-based model using 'bert-base-cased' with the transformers library, and the linear regression model using 'linear_model.LinearRegression' from scikit-learn (version 0.23.1). 10 We also constructed a model that disregards the dialogue history for comparison.
Results and Analysis
Table 10 shows the results. The BERT model disregarding dialogue history achieved higher scores than the model using dialogue history. As revealed in Section 3.3, indirect and direct responses use largely different vocabularies and phrases, which may serve as clues for predicting the degree of directness. Nonetheless, the exact-match percentage remains at 0.814, and a more sophisticated model is needed to exploit the dialogue history effectively. The exact-match rate for the linear regression model (LR w/o history) was 0.540, far above the chance rate of 0.167, indicating that bag-of-words features are useful for predicting directness, although not as strong as the BERT-based models.

Finally, we evaluated the performance of response generation for the indirect and direct responses identified by the best model in Table 10. The model predicted 1,842 responses as indirect and 5,530 as direct. 11 We generated system responses to each of the indirect and direct user responses using the model proposed by Yang et al. (2021), the most advanced end-to-end response-generation model for task-oriented dialogues. The BLEU scores for the indirect and direct responses were 10.25 and 14.09, respectively. This confirms that direct utterances are easier for the dialogue system to respond to accurately, while indirect utterances are more difficult.
Conclusion
We created DIRECT, a dialogue corpus providing 71,498 pairs of pragmatic paraphrases with context. In addition, we proposed three benchmark tasks and showed the performance of state-of-the-art pre-trained models as baselines.
In future work, we will apply DIRECT to a task-oriented dialogue system to handle indirect responses in an end-to-end manner. We also intend to investigate the relations between pragmatic paraphrases and other features such as dialogue acts and belief states. | 5,580.4 | 2021-01-01T00:00:00.000 | [
"Computer Science",
"Linguistics"
] |
One-loop non-renormalization results in EFTs
In Effective Field Theories (EFTs) with higher-dimensional operators, many anomalous dimensions vanish at the one-loop level for no apparent reason. With the use of supersymmetry, and a classification of the operators according to their embedding in super-operators, we are able to show why many of these anomalous dimensions are zero. The key observation is that one-loop contributions from superpartners trivially vanish in many cases under consideration, making supersymmetry a powerful tool even for non-supersymmetric models. We show this in detail in a simple U(1) model with a scalar and fermions, and explain how to extend this to SM EFTs and the QCD Chiral Lagrangian. This provides an understanding of why most "current-current" operators do not renormalize "loop" operators at the one-loop level, and allows one to find the few exceptions to this ubiquitous rule.
Introduction
Quantum Effective Field Theories (EFTs) provide an excellent framework to describe physical systems, most prominently in particle physics, cosmology and condensed matter. With the recent discovery of the Higgs boson and the completion of the SM, EFTs have provided a systematic approach to smartly parametrize our ignorance of possible new degrees of freedom at the TeV scale. Any theory beyond the SM, with new heavy degrees of freedom, can be matched onto an EFT that consists of operators built solely out of the SM degrees of freedom.
Recently, much effort has been put into the determination of the one-loop anomalous dimensions of the dimension-six operators of the SM EFT [1,2,3,4,5]. This has revealed a rather intriguing structure in the anomalous-dimension matrix, with plenty of vanishing entries that are a priori allowed by all symmetries. Some vanishing entries are trivial, since no possible diagram exists. Others, however, involve intricate cancelations without any apparent reason. Similar cancelations had been observed before in other EFTs (see for example [6,7]).
To make manifest the pattern of zeros in the matrix of anomalous dimensions, it is crucial to work in the proper basis. Refs. [2,3] pointed out the importance of working in bases with operators classified as "current-current" operators and "loop" operators. The first ones, which we call from now on JJ-operators, were defined to be those operators that can be generated as a product of spin-zero, spin-1/2 or spin-one currents of renormalizable theories [8,9,3], while the rest were called "loop" operators. 1 In this basis it was possible to show [2] that some classes of loop-operators are not renormalized by JJ-operators, suggesting a kind of generic non-renormalization rule. The complete pattern of zeros in the SM EFT was recently provided in Ref. [10] in the basis of [11], a basis that also maintains the separation between JJ- and loop-operators. A classification of operators based on holomorphy was suggested to be a key ingredient to understand the structure of zeros of the anomalous-dimension matrix [10].
In the present paper we provide an approach to understand in a simple way the vanishing of anomalous dimensions. The reason behind many cancelations is the different Lorentz structure of the operators, which makes it impossible for them to mix at the one-loop level. Although it is possible to show this in certain cases by simple inspection of the one-loop diagrams, we present a more compact and systematic approach based on the superfield formalism. For this reason we embed the EFT into an effective superfield theory (ESFT), and classify the operators depending on their embedding into super-operators. Using the ESFT, we are able to show, by the simple spurion analysis used to prove non-renormalization theorems in supersymmetric theories, the absence of mixing between operators of different classes in certain cases. We then make the important observation that the superpartner contributions to the one-loop renormalization under consideration trivially vanish in many cases. This allows us to conclude that some of the non-renormalization results of the ESFTs apply to the non-supersymmetric EFTs as well. In other words, we will show that in many cases supersymmetry allows one to relate a non-trivial calculation to a trivial one (that of the superpartner loops). This also provides a way to understand the few exceptions to the ubiquitous rule that JJ-operators do not renormalize loop-operators at the one-loop level.
The paper is organized as follows. In Sec. 2 we start with a simple theory, the EFT of scalar quantum electrodynamics, to illustrate our approach for obtaining one-loop nonrenormalization results. In later subsections, we enlarge the theory including fermions, and present an exceptional type of JJ-operator that renormalizes loop-operators. In Sec. 3 we show how to generalize our approach to derive analogous results in the SM EFT and we also discuss the holomorphic properties of the anomalous dimensions. In Sec. 4 we show the implications of our approach for the QCD Chiral Lagrangian. We conclude in Sec. 5.
Non-renormalization results in a U(1) EFT

Let us start with the simple case of a massless scalar coupled to a U(1) gauge boson with charge Q_φ, assuming for simplicity CP-conservation. The corresponding EFT is defined as an expansion in derivatives and fields over a heavy new-physics scale Λ: L_EFT = Σ_d L_d, where L_d denotes the terms in the expansion made of local operators of dimension d. The leading terms (d ≤ 6) are given in Eq. (1), with the dimension-six operators, Eq. (2), being O_6 = |φ|^6, O_r = |φ|² |D_μφ|² and O_FF = |φ|² F_{μν}F^{μν}. We can use different bases for the dimension-six operators although, when looking at operator mixing, it is convenient to work in a basis that separates JJ-operators from loop-operators, as we defined them in the introduction. Using field redefinitions (or, equivalently, the equation of motion (EOM) of φ) we can reduce the number of JJ-operators to only two. It is convenient to set a one-to-one correspondence between operators and supersymmetric D-terms, as we will show below; for this reason, we choose O_6 and O_r for our basis. 2 The only loop-operator is O_FF.

Many of the one-loop non-renormalization results that we discuss can be understood from arguments based on the Lorentz structure of the vertices involved. Take for instance the non-renormalization of O_FF by O_r. Integrating by parts and using the EOM, we can eliminate O_r in favor of O'_r = (φD_μφ*)² + h.c. Now, it is apparent that O'_r cannot renormalize O_FF because either φD_μφ* or φ*D_μφ is external in all one-loop diagrams, and these Lorentz structures cannot be completed to form O_FF. Since, in addition, there are no possible one-loop diagrams involving O_6 that contribute to O_FF, we can conclude that in this EFT the loop-operator cannot be renormalized at the one-loop level by the JJ-operators. As we will see, similar Lorentz-based arguments can be used for other non-renormalization results. This approach, however, requires a case-by-case analysis, and it is not always guaranteed that one can find an easy argument showing that a loop vanishes without a calculation. In this paper we present a more systematic and unified understanding of such vanishing anomalous dimensions, based on a superfield approach that we explain next.
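A minimal display-form rendering of the expansion just described; the 1/Λ² normalization of the dimension-six coefficients is our assumption, since the original displays, Eqs. (1)-(2), are not reproduced in this extraction:

```latex
\mathcal{L}_{\rm EFT} \;=\; \sum_d \mathcal{L}_d\,, \qquad
\mathcal{L}_6 \;=\; \frac{1}{\Lambda^2}\Big(c_6\,\mathcal{O}_6 + c_r\,\mathcal{O}_r + c_{FF}\,\mathcal{O}_{FF}\Big)\,, \\
\mathcal{O}_6 = |\phi|^6\,, \qquad
\mathcal{O}_r = |\phi|^2\,|D_\mu\phi|^2\,, \qquad
\mathcal{O}_{FF} = |\phi|^2\,F_{\mu\nu}F^{\mu\nu}\,.
```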
We first promote the model of Eq. (1) to an ESFT and study the renormalization of the dimension-six operators in this supersymmetric theory. The superfield formalism makes it transparent which operators cannot mix at the one-loop level. Although in this theory the renormalization of operators also involves loops of superpartners, we will show in a second step that either the ordinary loop (involving φ and A_μ) is already trivially zero or it is the superpartner loops that trivially vanish. Therefore, having ensured that there are no cancellations between loops of ordinary matter and supermatter, we are able to extend the supersymmetric non-renormalization results to the non-supersymmetric case. In other words, the advantage of this approach is that we can turn a loop calculation with the ordinary φ and A_μ into a calculation with superpartners, where the Lorentz structure of the vertex can make it easier to see that the one-loop contributions are zero.
The dimension-six operators of Eq. (2) can be embedded in different types of super-operators. As will become clear in what follows, it is important for our purposes to embed the dimension-six operators into super-operators with the lowest possible dimension. This corresponds to an embedding into the highest θ-component of the super-operator (notice that we can always lower the θ-component by adding derivatives in superspace). This provides a classification of the dimension-six operators that is extremely useful in analyzing the one-loop mixings.

Let us start with the loop-operator O_FF. Promoting φ to a chiral supermultiplet Φ and the gauge boson A_μ to a vector supermultiplet V, one finds that O_FF can be embedded into the θ²-component (F-term) of the super-operator (Φ†e^{V_Φ}Φ)W^αW_α, where we have defined V_Φ ≡ 2Q_φV, W_α is the field-strength supermultiplet, and we follow the notation of [12] (using a mostly-plus metric). Since this super-operator is non-chiral, O_FF cannot be generated in a supersymmetry-preserving theory at any loop order. For the embedding of the JJ-operators the situation is different: some of them can be embedded in a D-term (a θ²θ̄²-component), while for others this is not possible. In the example discussed here, O_r can be embedded in the D-term of (Φ†e^{V_Φ}Φ)², and is therefore allowed by supersymmetry to appear in the Kähler potential; it is not protected from one-loop corrections. Nevertheless, O_6 must arise from the θ⁰-component of a super-operator and must then be zero in a supersymmetry-preserving theory at any loop order.

We can now embed Eq. (1) in an ESFT. We use a supersymmetry-breaking (SSB) spurion superfield η ≡ θ² (of dimension [η] = -1) to incorporate the couplings of Eq. (1) that break supersymmetry, obtaining the ESFT Lagrangian of Eq. (6). 3 It is very easy to study the one-loop mixing of the dimension-six operators in the above ESFT using a simple η-spurion analysis. For example, it is clear that there cannot be renormalization from terms with no SSB spurions, such as c̄_r, to terms with SSB spurions, such as c̄_FF. Also, corrections from c̄_r to c̄_6 are only possible through the insertion of λ_φ, which carries an ηη†. Similarly, terms with one SSB spurion η† cannot renormalize terms with two SSB spurions η†η unless they are proportional to λ_φ. This means that c̄_FF can only renormalize c̄_6 with the insertion of a λ_φ. The inverse is, however, not guaranteed: terms with more SSB spurions can in principle renormalize terms with fewer spurions. For example, c̄_FF, which carries a spurion η†, could generate at the loop level the operator of Eq. (7), where Õ_r = (Φ†e^{V_Φ}Φ)² and the gauge-covariant derivative in superspace has been used. Therefore one has to check case by case. For example, c̄_6 could in principle renormalize c̄_FF, but the relevant diagram cannot be drawn, since it involves a vertex with too many Φ's. This implies that c̄_FF is only renormalized by itself at the one-loop level. This simple renormalization structure is the starting point from which, by examining more closely the loops involved at the field-component level, we will derive the following non-renormalization results in the non-supersymmetric EFT of Eq. (1).

Non-renormalization of O_FF by O_r: The differences between our original EFT in Eq. (1) and its supersymmetric version, Eq. (6), are the presence of the fermion superpartners of the gauge boson and the scalar: the gaugino λ and the "Higgsino" ψ. We will show, however, that the contributions from superpartners trivially vanish in the mixing of JJ- and loop-operators.
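A compact summary of the embeddings just described, in the notation of the text; the (Φ†e^{V_Φ}Φ)³ form shown for O_6 is our reading of the argument (O_6 must sit in a θ⁰θ̄⁰-component) rather than a verbatim quote of the original display:

```latex
\mathcal{O}_{FF} \subset \big[(\Phi^\dagger e^{V_\Phi}\Phi)\,W^\alpha W_\alpha\big]_{\theta^2}\,, \qquad
\mathcal{O}_{r} \subset \big[(\Phi^\dagger e^{V_\Phi}\Phi)^2\big]_{\theta^2\bar\theta^2}\,, \qquad
\mathcal{O}_{6} \subset \big[(\Phi^\dagger e^{V_\Phi}\Phi)^3\big]_{\theta^0\bar\theta^0}\,.
```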
In Eq. (8) we display the only three terms that can potentially contribute to O_FF at the one-loop level. These terms can be considered part of a supersymmetric JJ-operator generated from integrating out a heavy vector superfield containing a scalar, a vector and a fermion. Other terms not shown in Eq. (8) involve too many fields (see Appendix) and are therefore only relevant for an analysis beyond one loop. The first term of Eq. (8) can potentially contribute to O_FF through a loop of φ's, while the second and third terms could do so through a loop of Higgsinos. It is very easy to see that the loop of Higgsinos does not contribute to O_FF. Indeed, if in the second term of Eq. (8) we close the Higgsinos in a loop, the current D_μφ is left as an external factor, and it is then clear that we can only generate the JJ-operator J_μJ^μ. Moreover, the third term of Eq. (8) vanishes upon using the EOM σ̄^μD_μψ = 0 (up to gaugino terms that are not relevant here). Therefore, Higgsinos do not contribute at the one-loop level to the renormalization of the loop-operator O_FF. We can then extend the non-renormalization result from the ESFT of Eq. (6) to the non-supersymmetric EFT of Eq. (1) and conclude that the loop-operator cannot be renormalized at the one-loop level by the JJ-operators.
Non-renormalization of O_r by O_FF: It remains to study the renormalization of O_r by O_FF, which can arise in principle from a loop of gauge bosons. In the supersymmetric theory, Eq. (6), c̄_r does not carry any SSB spurion and therefore its renormalization by c̄_FF cannot be prevented on general grounds, as we explained before. Nevertheless, we find that operators induced by c̄_FF through a loop of V's must leave an external factor η†Φ†e^{V_Φ}Φ from the vertex; the only operator that could potentially contribute to c̄_r must then have the form of Eq. (9). 4 From the EOM for Φ we have that D̄²Φ† = 0, up to λ_φ terms that bring too many powers of Φ, so that the projection of Eq. (9) onto O_r vanishes. Finally, one also has to ensure that redundant JJ-super-operators, which can give (Φ†e^{V_Φ}Φ)² through superfield redefinitions, are not generated at the one-loop level. In particular, the redundant super-operator of Eq. (10), if generated at the loop level, can give a contribution to c̄_r after superfield redefinitions or, equivalently, after using the EOM of V. We do not find, however, any non-zero contribution from η†(Φ†e^{V_Φ}Φ)W^αW_α to the operator in Eq. (10), as such contributions, coming from a V/Φ loop, must be proportional to η†W^αΦ. 5 Having shown that supersymmetry guarantees zero contributions to c̄_r from c̄_FF, we must check the effects of superpartner loops. From Eq. (11) (see Appendix), it is clear that a gaugino/Higgsino loop cannot give a contribution to O_r: the second term of Eq. (11), after using the EOM for the gaugino, σ̄^μ∂_μλ† = gφψ†, can only give a contribution proportional to |φ|²φ, while the contribution from the third term must be proportional to φ*F_{μν}. Neither has the right Lorentz structure to contribute to O_r. Therefore, we conclude that the loop-operator O_FF can only renormalize at the one-loop level the JJ-operators that break supersymmetry, like O_6, and not those that can be embedded in a D-term, like O_r.
Including fermions
Let us extend the previous EFT to include two charged Weyl fermions, q and u, with U(1) charges Q_q and Q_u, such that Q_φ + Q_q + Q_u = 0. We now have extra terms in the Lagrangian (respecting CP-invariance), given in Eq. (12), 6 where f = q, u. The JJ-operators are listed in Eq. (13). Instead of O_φf, we could have chosen the more common JJ-operator i(φ* ↔D_μ φ)(f†σ̄^μf) for our basis; both are related by Eq. (14), where the last term can be eliminated by use of the EOM. Our motivation for keeping O_φf in our basis is that, as we will see later, it is in one-to-one correspondence with a supersymmetric D-term. The only additional loop-operator for a U(1) model with fermions is the dipole operator of Eq. (15).

Let us consider the operator mixing in this extended EFT. We will discuss all cases except those for which no diagram exists at the one-loop level. As we said before, many vanishing entries of the anomalous-dimension matrix can be understood simply from inspection of the Lorentz structure of the different vertices. For example, it is relatively simple to check that the JJ-operators O_4f and O_φf do not renormalize the loop-operators. For this purpose, it is important to recall that we can write four-fermion operators, such as (q†σ̄^μq)(u†σ̄_μu), in the equivalent form q†u†qu. From this it is obvious that closing a loop of fermions can only give operators containing the Lorentz structure f†f or qu, which cannot be completed to give a dipole operator (nor its equivalent forms, qσ^{μν}σ^ρD_ρq†F_{μν} or D_μφ qD^μu). For the case of O_φf, the absence of renormalization of the dipole operator, for example from diagrams like the one in Fig. 1, can be proved just by realizing that we can always keep the Lorentz structure σ̄^μD_μ(φf) external to the loop; this Lorentz structure cannot be completed to form a dipole operator. The contribution of O_φf to O_FF is also absent, as can be deduced from Eq. (14): the first term, after closing the fermion loop, gives the wrong Lorentz structure to generate O_FF, while the second term gives an interaction with too many fields once we use the fermion EOM. Finally, O_yu can only contribute to the Lorentz structure φqu, not to the dipole one in Eq. (15).
We can be more systematic and complete using our ESFT approach. Let us first see how the operators of Eq. (12) can be embedded in super-operators. By embedding q and u in the chiral supermultiplets Q and U, we find that the dipole loop-operator must arise from the θ²-term of a non-chiral superfield, Eq. (16). Among the JJ-operators of Eq. (13), two of them can arise from supersymmetric D-terms, Eq. (17), and are then supersymmetry-preserving, with similar operators for Q → U, where we again use the short-hand notation V_Q = 2Q_qV. Nevertheless, one of the JJ-operators must come from the θ²-component of a non-chiral superfield that is not invariant under supersymmetry, Eq. (18). We can now promote Eq. (12) to an ESFT, Eq. (19), where F = Q, U. By simple inspection of these vertices, however, we find that no diagram mixing JJ- and loop-super-operators is possible at the one-loop level. Therefore, in the ESFT the loop-operators are not renormalized at the one-loop level by the JJ-operators.
To extend the above results to the non-supersymmetric EFT, we must ensure that these non-renormalization results do not arise from cancellations between loops involving "ordinary" fields (A_μ, φ, q and u) and loops involving superpartners (λ, ψ, q̃ and ũ). This can be proved by showing that either the former or the latter are zero. In certain cases it is easier to look at the loop of ordinary fields, while in others it is easier to look at the superpartner loops. For example, from Eq. (20) (see Appendix) we see that a renormalization of O_D can arise either from the first term (by a loop of "quarks" q) or from the second and third terms (by a loop of "squarks" q̃). It is easier to see that the loops of squarks are zero: they can only generate operators containing q†σ̄^μq or q†σ̄^μ ↔D_μ q, which do not have the structure necessary to contribute to the dipole operator O_D, nor to operators related to it by EOMs, such as qσ^{μν}σ^ρD_ρq†F_{μν}. We could proceed similarly for the other operators. For the case of O_φf, however, the one-loop contribution to O_D contains scalars and fermions (see Fig. 1) and the corresponding graph with superpartners has a similar structure, and is therefore not simpler. Nevertheless, both can be shown to be zero by realizing that σ̄^μD_μ(φf) can always be kept external to the loop, and that this Lorentz structure cannot be completed to form a dipole operator. We can conclude that the absence of renormalization of loop-operators by JJ-operators, valid in the ESFT, also applies to the EFT.
Class of JJ-operators not renormalized by loop-operators: Following the same approach, we can also check whether loop-operators can generate JJ-operators. Let us first work within the ESFT. We have already shown that the loop-super-operator η†(Φ†e^{V_Φ}Φ)W^αW_α cannot generate the JJ-super-operator (Φ†e^{V_Φ}Φ)². The same argument applies straightforwardly to the other supersymmetry-preserving JJ-super-operators. For the case of the dipole super-operator, η†Φ(Q ↔D^α U)W_α, there is a potential contribution to Q†e^{V_Q}Q U†e^{V_U}U coming from a Φ/V loop. Nevertheless, as the factor η†Q ↔D^α U remains on the external legs, it is clear that such a contribution can only lead to operators containing η†D², which are not JJ-super-operators. Similarly, contributions to Φ†e^{V_Φ}Φ Q†e^{V_Q}Q could arise from a U/V loop, but one can always arrange it to leave either η†D^αΦ or η†D^αQ on the external legs, 7 which again does not have the structure of a JJ-super-operator (the same applies for Q ↔ U). Finally, we must check whether redundant JJ-super-operators, such as the one in Eq. (10), can be generated by the dipole; arguments similar to those below Eq. (10) prove that this is not the case. Notice, however, that we cannot guarantee the absence of renormalization by loop-super-operators for the JJ-super-operators that break supersymmetry. We then conclude that only the JJ-super-operators that preserve supersymmetry (with no SSB spurions) are safe at the one-loop level from renormalization by loop-super-operators.
It remains to show that this result extends to the non-supersymmetric EFT. From Eq. (41) of the Appendix we have, after using the gaugino EOM and eliminating the auxiliary fields F_i, that loops of superpartners can only give contributions proportional to φff, |φ|²f, ff or F_{μν}f (for f = q, u). None of these terms can lead to the Lorentz structure of O_r, O_4f or O_φf. These are exactly the JJ-operators that could not be generated (at one loop) from loop-operators in the ESFT.
An exceptional JJ-operator
Let us finally extend the EFT to include an extra fermion, a "down-quark" d of charge Q_d, such that Q_φ = Q_q + Q_d. The extra terms allowed in the Lagrangian are given in Eq. (21), where we now have the additional JJ-operators of Eq. (22), apart from operators similar to the ones in Eq. (12) with f also including d.
Following the ESFT approach, we embed the d-quark in a chiral supermultiplet D and the operators of Eq. (21) into the super-operators of Eq. (23).

7 Using integration by parts and the EOM of V, the dipole super-operator can be rewritten in an equivalent form.

Figure 2: Contribution to c̄_{y_u y_d} proportional to y_d y_u.
As all of these operators come from the θ²-term of non-chiral super-operators, we learn that they can only be generated with supersymmetry breaking. We can promote Eq. (21) to an ESFT, Eq. (24). Now, and this is very important, when considering only d, q and φ in isolation (without the u fermion), we can always change the supersymmetric embedding of φ by considering φ* ∈ Φ̃, where Φ̃ is a chiral supermultiplet of charge -1/2. By doing this, we can write the Yukawa term for the d in a supersymmetric way, ∫d²θ y_d Φ̃QD, and guarantee that the renormalization of operators involving only φ, q, d is identical to that of φ, q, u explained in the previous section.
It is then clear that supersymmetry breaking from Yukawas can only arise through the combination y_u y_d. This allows us to explain why contributions to O_{y_u y_d} from (q†σ̄^μq)(d†σ̄_μd) must be proportional to y_u y_d, as explicit calculations have shown in the SM context [10]. In the ESFT, the operator (q†σ̄^μq)(d†σ̄_μd) is embedded in a supersymmetry-preserving super-operator and can therefore only generate supersymmetry-breaking interactions, such as O_{y_u y_d}, via the SSB couplings y_u y_d (see Fig. 2). The one-loop contributions from superpartners do not affect this result, as Eq. (20) shows that they are trivially zero.
The operators O_{y_u y_d} and O_{y_{u,d}} are the only JJ-operators that are embedded in the ESFT with the same SSB-spurion dependence as the loop-operators (see Eq. (24)). Therefore, they can potentially renormalize O_D. Although this is not the case for O_{y_{u,d}}, due to its Lorentz structure as explained above, we have confirmed by explicit calculation that O_{y_u y_d} indeed renormalizes O_D. This is thus an exception to the ubiquitous rule that JJ-operators do not renormalize loop-operators.
Generalization to the Standard Model EFT
We can generalize the previous analysis to dimension-six operators in the SM EFT. We begin by constructing an operator basis that separates JJ-operators from loop-operators. We then classify the operators according to their embedding into a supersymmetric model, depending on whether they can arise from a super-operator with no SSB spurion (η⁰), which therefore preserves supersymmetry, or whether they need SSB spurions, either D̄_α̇η†, η†, |D̄_α̇η†|² or ηη† (selecting the θ̄θ², θ², θ̄θ and θ̄⁰θ⁰ components of the super-operator, respectively), or their Hermitian conjugates. The supersymmetric embedding naturally selects a SM basis, which we present in Table 1. In this basis, the non-renormalization results between the different classes of operators discussed in the previous section also hold.

[Table 1. Left: dimension-six operator basis, separating JJ-operators from loop-operators and distinguishing operators that can arise from a supersymmetric D-term (η⁰) from those that break supersymmetry via a spurion D̄_α̇η†, η†, |D̄_α̇η†|² or |η|². F^a_{μν} (F̃^a_{μν}) denotes any SM gauge (dual) field-strength; the t^a matrices include the U(1)_Y, SU(2)_L and SU(3)_c generators, depending on the quantum numbers of the fields involved; fermion operators are written schematically with f = {Q_L, u_R, d_R, L_L, e_R}. Right: for each operator in the left column, the super-operator in which it is embedded.]

The operator basis of Table 1 is close to the basis defined in Ref. [11]. One significant difference is our choice of the only-Higgs JJ-operators, which we take to be O_± and O_6, and of the Higgs-fermion JJ-operator O_Hf. As in the U(1) case, this choice is motivated by the embedding of operators into superfield operators, as we have just mentioned (see more details below). Concerning the classification of four-fermion operators, our O_4f operators correspond not only to types (LL)(LL), (RR)(RR) and (LL)(RR) of Ref. [11], but also to the operator Q_ledq = (L_L e_R)(d_R Q_L), classified as (LR)(RL) in [11], since the latter can be written as an O_4f by Fierz rearrangement. Finally, our O_yy operators correspond to the four operators of type (LR)(LR) in [11].
To embed the SM fields in supermultiplets we follow the common practice of working with left-handed fermion fields, so that Q_L, u^c_R and d^c_R are embedded into the chiral supermultiplets Q, U and D (generically denoted by F). With an abuse of notation, we use H both for the SM Higgs doublet and for the chiral supermultiplet into which it is embedded. Finally, gauge bosons are embedded in vector superfields V^a, and we use the notation V_Φ ≡ 2t^aV^a, where the t^a include the generators of the SM gauge group in the representation of the chiral superfield Φ.
Concerning the embedding of operators into super-operators, there are a few differences with respect to the U(1) model discussed in the previous section. Starting with the JJ-operators, we have a new type of operator not present in the U(1) model: O^R_ud, which involves the currents iH̃†D_μH and ū_Rγ^μd_R. This operator cannot be embedded in a D-term like the others, due to H̃†H = 0, and must be embedded as the θ²θ̄-term of a spinor super-operator, Eq. (26). For the JJ-operators involving only the Higgs field, there is also an important difference with respect to the U(1) case. We now have two independent operators, O_+ and O_−, 8 but only O_+ can arise from a supersymmetric D-term, 9 Eq. (27), where O_r, O_H and O_T denote the SM analogues of the U(1) operators, obtained simply by replacing φ by H. The other independent only-Higgs operator, O_−, must arise from an SSB term: we find that it can be the θθ̄-component of the superfield of Eq. (28), and we can write it in a superfield Lagrangian by using the spurion |D̄_α̇η†|², Eq. (29).

Concerning loop-operators, we have the new operators O_3F = f_abc F^{aν}_μ F^{bρ}_ν F^{cμ}_ρ and O_{3F̃} = f_abc F^{aν}_μ F^{bρ}_ν F̃^{cμ}_ρ, possible now for the non-Abelian groups SU(2)_L and SU(3)_c, which again can only arise from a θ²-term, Eq. (31), where we define O_{3F±} = O_{3F} ∓ iO_{3F̃}. To contain O_{3F+}, Eq. (31) must then appear in the ESFT multiplying the SSB-spurion η†, as for the rest of the loop-operators. For the loop-operators O_FF = H†t^at^bH F^a_{μν}F^{bμν} and their CP-violating counterparts O_{FF̃} = H†t^at^bH F^a_{μν}F̃^{bμν}, we can proceed as above and embed them together in the super-operators of Eq. (32).
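Collecting in display form the loop-operator definitions quoted in this subsection (the index placement is our transcription of the extracted text):

```latex
\mathcal{O}_{3F} = f_{abc}\,F^{a\,\nu}_{\;\;\mu}\,F^{b\,\rho}_{\;\;\nu}\,F^{c\,\mu}_{\;\;\rho}\,, \qquad
\mathcal{O}_{3\widetilde F} = f_{abc}\,F^{a\,\nu}_{\;\;\mu}\,F^{b\,\rho}_{\;\;\nu}\,\widetilde F^{c\,\mu}_{\;\;\rho}\,, \qquad
\mathcal{O}_{3F\pm} = \mathcal{O}_{3F} \mp i\,\mathcal{O}_{3\widetilde F}\,, \\
\mathcal{O}_{FF} = H^\dagger t^a t^b H\,F^a_{\mu\nu}F^{b\,\mu\nu}\,, \qquad
\mathcal{O}_{F\widetilde F} = H^\dagger t^a t^b H\,F^a_{\mu\nu}\widetilde F^{b\,\mu\nu}\,.
```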
One-loop operator mixing
It is straightforward to extend the U(1) analysis of Section 2 to the operators of Table 1 and show that, with the exception of O_yy, the JJ-operators do not renormalize the loop-operators. The only important differences arise from the new types of JJ-operators, O^R_ud and O_−. Concerning O^R_ud, it is very simple to see that this operator cannot renormalize loop-operators: from a loop of quarks one obtains operators with the Lorentz structure iH̃†D_μH, while the Higgs loop gives operators containing d̄_Rγ^μu_R, and neither can appear in loop-operators. Concerning O_−, we only need to worry about the renormalization of O_FF. This can be studied directly in the ESFT, since superpartner contributions from JJ-operators to loop-operators can be shown to trivially vanish. In the ESFT, the operator O_− is embedded in a super-operator containing the SSB-spurion |D_αη|², which guarantees the absence of renormalization of loop-super-operators, as these contain the SSB-spurion η†. Besides this direct contribution, there is an indirect route by which O_− could renormalize O_FF: by generating O_HF = i(D^μH)†t^a(D^νH)F^a_{μν}, which, via integration by parts, can give O_FF. The operator O_HF can come from the super-operator Õ_HF = D̄_α̇η† D̄^α̇H†e^{V_H}D^αH W_α, which in principle is not protected by a simple SSB-spurion analysis from being generated by super-operators proportional to |D_αη|². Nevertheless, contributions to Õ_HF must come from Eq. (29) with derivatives acting on the two Higgs superfields external to the loop; due to the derivative contractions, this can only give D̄_α̇η†D^αη D̄^α̇H†D_αH D^βW_β which, upon use of the EOM of V, gives a JJ-super-operator and not Õ_HF.
In the SM case, the exceptional $\mathcal{O}_{yy}$ operators (that can in principle renormalize the dipole operators) are, following the notation of [3], built with $r,s$ denoting $SU(2)_L$ indices and $T^A$ the $SU(3)_c$ generators. Although in principle all four of these operators could renormalize the SM dipoles, it is easy to realize that $\mathcal{O}_{y_u y_e}$ will not: the only possible way of closing a loop ($\bar{Q}_L u_R$ or $\bar{L}_L e_R$) does not reproduce the dipole Lorentz structure for the external fermion legs. One concludes that only the three remaining operators in Eq. (33) renormalize the SM dipole operators, and we have verified this by an explicit calculation. These are the only dimension-six JJ-operators of the SM that renormalize loop-operators. Some of these exceptions were also pointed out in [4]. Our analysis completes the list of these exceptions and helps to understand the reason behind them. From the analysis of the U(1) case, we can also explain the presence of $y_u y_d$ in the renormalization of $\mathcal{O}_{yy}$ from $\mathcal{O}_{4f}$ [10]. It is obvious that no operator other than itself renormalizes $\mathcal{O}^{+}_{3F}$: no adequate one-loop 1PI diagram can be constructed from other dimension-six operators, since they have too many fermion and/or scalar fields. Nevertheless, $\mathcal{O}^{+}_{3F}$ can in principle renormalize JJ-operators. Let us consider, for concreteness, the case of $\mathcal{O}^{+}_{3F}$ made of $SU(2)_L$ field-strengths. SM-loop contributions from $\mathcal{O}^{+}_{3F}$ can generate the JJ-operators $(D^\nu F^a_{\mu\nu})^2$ and $J^{a\,\mu}D^\nu F^a_{\mu\nu}$ (where $J^a_\mu$ is the weak current), and indeed these contributions have been found to be nonzero by an explicit calculation [5]. By using the EOM, $D^\nu F^a_{\mu\nu} = gJ^a_\mu$, we can reduce these two operators to $(J^a_\mu)^2$. Surprisingly, one finds that the total contribution from $\mathcal{O}^{+}_{3F}$ to $(J^a_\mu)^2$ adds up to zero [5,10]. We can derive this result as follows. From inspection of Eq. (42), one can see that the superpartners cannot give any one-loop contribution to these JJ-operators. Therefore the result must be the same in the SM EFT as in the corresponding ESFT. Looking at the Higgs component of $(J^a_\mu)^2 = (H^\dagger\sigma^a\overleftrightarrow{D}_\mu H)^2 + \cdots$, we see that this operator must arise from the ESFT term $(D^\alpha\eta\,J^a_\alpha + \text{h.c.})^2$, where $J^a_\alpha = H^\dagger\sigma^a D_\alpha H$. This super-operator, however, cannot be generated from the super-operator in Eq. (31), as the latter appears in the ESFT with a different number of SSB-spurions $\eta^\dagger$. This proves that $\mathcal{O}^{+}_{3F}$ cannot generate JJ-operators with Higgs fields. Now, if current-current super-operators with $H$ are not generated, those with $Q$ cannot be generated either, since in the ESFT the $SU(2)_L$ vector does not distinguish between different $SU(2)_L$-doublet chiral superfields. This completes the proof that $\mathcal{O}^{+}_{3F}$ does not renormalize any JJ-operator in the basis of Table 1.
Concerning the non-renormalization of JJ-operators by loop-operators, the last new case left to discuss is that of $\mathcal{O}_-$ by $\mathcal{O}_{FF}$. The SSB-spurion analysis forbids such renormalization in the ESFT, and the result can be extended to the SM EFT as no superpartner loop contributes either (see Eq. (40) in the Appendix).
At energies below the electroweak scale, we can integrate out the W, Z, Higgs and top, and write an EFT with only light quarks and leptons, the photon and gluons. This EFT contains four-fermion operators of type $\mathcal{O}_{4f}$, generated at tree-level, which are JJ-operators, and other operators of dipole type, which are loop-operators. Following the above approach we can prove that these four-fermion operators cannot renormalize the dipole-type operators, and this is exactly what is found in explicit calculations [7].
Holomorphy of the anomalous dimensions
It has recently been shown in Ref. [10], based on explicit calculations, that the anomalous dimension matrix respects, to a large extent, holomorphy. Here we would like to show how to derive some of these properties using our ESFT approach. In particular, we will derive that, with the exception of one case, the one-loop anomalous dimensions of the complex Wilson coefficients $c_i = \{c^{+}_{3F}, c^{+}_{FF}, c_D, c_y, c_{yy}, c^{ud}_R\}$ do not depend on their complex conjugates $c^{*}_j$; this is the statement of Eq. (34). We start by showing when Eq. (34) is satisfied just by simple inspection of the SM diagrams. For example, it is easy to realize that holomorphy must be respected in contributions from dimension-six operators in which fermions of a given chirality, e.g., $f_\alpha$ or $f_\alpha f_\beta$, are kept as external legs; indeed, the corresponding Hermitian-conjugate operator can only contribute to operators with fermions of the opposite chirality. Interestingly, we can extend the same argument to operators with field-strengths if we write the loop-operators in terms of $F_{\alpha\beta} \equiv (F^a_{\mu\nu}t^a\sigma^{\mu\nu})_{\alpha\beta}$, which transforms as a $(1,0)$ under the Lorentz group, and write the Hermitian conjugate of Eq. (35) in terms of $\bar{F}_{\dot\alpha\dot\beta}$, a $(0,1)$ under the Lorentz group. It is then clear, for example, that any diagram with an external $F_{\alpha\beta}$ respects holomorphy, as it can only generate the operators of Eq. (35) and not their Hermitian conjugates. One-loop contributions from $\mathcal{O}^{+}_{FF}$ in which $H^\dagger t^a t^b H$ is kept among the external fields, however, do not necessarily respect holomorphy. An explicit calculation is needed, and while contributions to $\mathcal{O}^{+}_{FF}$ vanish by the reasoning given in [1], contributions to $\mathcal{O}_y$ are found not to be holomorphic.
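For reference, the holomorphy statement referred to as Eq. (34) can be summarized as follows; the overall normalization of the renormalization-group equation is our convention, not taken from the text:

```latex
16\pi^2\,\mu\frac{d c_i}{d\mu} \;=\; \gamma_{ij}\,c_j \;+\; \tilde{\gamma}_{ij}\,c_j^{\ast},
\qquad
\tilde{\gamma}_{ij} \simeq 0
\quad\text{for}\quad
c_i \in \{c^{+}_{3F},\, c^{+}_{FF},\, c_D,\, c_y,\, c_{yy},\, c^{ud}_R\},
```

with a single exception, the renormalization of $\mathcal{O}_y$ by $\mathcal{O}^{+\,\dagger}_{FF}$ discussed below.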
Following our previous supersymmetric approach, it is quite simple to check whether or not loop contributions are holomorphic. In the ESFT, holomorphy is trivially respected: super-operators with an $\eta^\dagger$-spurion renormalize among themselves and cannot induce the Hermitian-conjugate super-operators, since those contain an $\eta$, and vice versa. This means that possible breakings of holomorphy at the field-component level must be the same in the ordinary SM loop and in its corresponding superpartner loop, as the total breaking must cancel in their sum. Therefore we can look at either one loop or the other to check holomorphy. In this way, we can always relate holomorphy to fermion chirality. For example, the breaking of holomorphy in the renormalization of $\mathcal{O}_y$ from $\mathcal{O}^{+\,\dagger}_{FF}$ [10], mentioned before, can easily be seen to arise from the diagram of Fig. 3. It corresponds to the superpartner one-loop contribution to $\mathcal{O}_y$ arising from the vertex $|H|^2\lambda^\dagger\bar\sigma^\mu\partial_\mu\lambda \sim |H|^2 H\lambda^\dagger\psi^\dagger$ of Eq. (11), where we have used the EOM of $\lambda$ (and replaced the U(1) fields $\phi$ and $\psi$ by the SM Higgs and Higgsino).
Implications for the QCD Chiral Lagrangian
We can extend the above analysis also to the QCD Chiral Lagrangian [6]. At $O(p^2)$ we have the kinetic operator $\mathrm{Tr}\,(D_\mu U^\dagger D^\mu U)$, which can be embedded in a D-term as $\int d^4\theta\,\mathcal{U}^\dagger\mathcal{U}$, where $U$ and its superpartners are contained in $\mathcal{U} \equiv e^{i\Phi}$, with $\Phi$ a chiral superfield. At $O(p^4)$, the QCD Chiral Lagrangian is usually parametrized by the $L_i$ coefficients [6] in a basis whose operators are linear combinations of JJ-operators and loop-operators. A more convenient basis, however, is one with coefficients $L_{JJ} = L_9/2$ and $L_{loop} = L_9 + L_{10}$. It is easy to see that the first operator of Eq. (38) is a JJ-operator, while the second is a loop-operator. The latter can only be embedded in a $\theta^2$-term of a super-operator (i.e., $\mathcal{U}^\dagger W^\alpha_R\,\mathcal{U}\,W_{L\,\alpha}$), and therefore it cannot be renormalized by the operator in Eq. (36) in the supersymmetric limit. As contributions from superpartner loops can easily be shown to vanish, we deduce that Eq. (36) cannot renormalize $L_{loop}$ at the one-loop level. This is indeed what one finds from the explicit calculation [6]: $\gamma_{L_{loop}} = \gamma_{L_9} + \gamma_{L_{10}} = 1/4 - 1/4 = 0$.
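The $O(p^4)$ operators themselves are not displayed above; in the standard Gasser-Leutwyler conventions (assumed here, since the original equations were not recoverable) the relevant terms read:

```latex
\mathcal{L}_{p^4} \supset
-\,i\,L_9\,\big\langle F_R^{\mu\nu}\,D_\mu U^\dagger D_\nu U
               + F_L^{\mu\nu}\,D_\mu U D_\nu U^\dagger \big\rangle
\;+\; L_{10}\,\big\langle U^\dagger F_R^{\mu\nu}\,U\,F_{L\,\mu\nu} \big\rangle,
```

so that, in the basis described in the text, the genuinely loop-type structure $\langle U^\dagger F_R^{\mu\nu} U F_{L\,\mu\nu}\rangle$ comes with the combination $L_{loop} = L_9 + L_{10}$, while $L_{JJ} = L_9/2$ multiplies the remaining current-current combination.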
Conclusions
In EFTs with higher-dimensional operators, the one-loop anomalous dimension matrix has plenty of vanishing entries apparently not forbidden by the symmetries of the theory. In this paper we have shown that the reason behind these zeros is the different Lorentz structure of the operators, which does not allow them to mix at the one-loop level. We have proposed a way to understand the pattern underlying these zeros, based on classifying the dimension-six operators into JJ-operators and loop-operators, and also according to their embedding in super-operators (see Table 1 for the SM EFT). We have seen that all loop-operators break supersymmetry (footnote 10), while there are two classes of JJ-operators: those that can be supersymmetrized and those that cannot. This classification is very useful for obtaining non-renormalization results based on a pure SSB-spurion analysis in superfields, which can be extended to non-supersymmetric EFTs. In terms of component fields, the crucial point is that the vanishing of the anomalous dimensions does not arise from cancellations between bosons and fermions but from the underlying Lorentz structure of the operators.
We have presented how this approach works in a simple U(1) model with a scalar and fermions, and have explained how to extend it to the SM EFT and the QCD Chiral Lagrangian. The main results are summarized in Fig. 4, which shows which entries of the anomalous-dimension matrix for the SM EFT operators we have proved to vanish; the red-shaded area in that figure satisfies holomorphy and is understood as a consequence of Lorentz symmetry. We have also explained how to check whether holomorphy is respected by the complex Wilson coefficients, a property that is fulfilled in most cases, as Fig. 4 shows. Our approach can be generalized to other theories, as well as to the analysis of other anomalous dimensions, work that we leave for a future publication.
10. For the non-Abelian case, there is also the loop-super-operator
Measurement of the single-top-quark t-channel cross section in pp collisions at sqrt(s) = 7 TeV
A measurement of the single-top-quark t-channel production cross section in pp collisions at sqrt(s) = 7 TeV with the CMS detector at the LHC is presented. Two different and complementary approaches have been followed. The first approach exploits the distributions of the pseudorapidity of the recoil jet and the reconstructed top-quark mass, using background estimates determined from control samples in data. The second approach is based on multivariate analysis techniques that probe the compatibility of the candidate events with the signal. Data have been collected for the muon and electron final states, corresponding to integrated luminosities of 1.17 and 1.56 inverse femtobarns, respectively. The single-top-quark production cross section in the t-channel is measured to be 67.2 +/- 6.1 pb, in agreement with the approximate next-to-next-to-leading-order standard model prediction. Using the standard model electroweak couplings, the CKM matrix element abs(V[tb]) is measured to be 1.020 +/- 0.046 (meas.) +/- 0.017 (theor.).
Introduction
Single top quarks can be produced through charged-current electroweak interactions. Due to the large top-quark mass, these processes are well suited to test the predictions of the standard model (SM) of particle physics and to search for new phenomena. Measurements of the single-top-quark production cross section also provide an unbiased determination of the magnitude of the Cabibbo-Kobayashi-Maskawa (CKM) matrix element |V tb |.
Single-top-quark production was observed in proton-antiproton collisions at the Tevatron collider with a centre-of-mass energy of 1.96 TeV [1-3]. The cross section increases by a factor of 20 at the Large Hadron Collider (LHC) with respect to the Tevatron. The first measurements of the single-top-quark production cross section in proton-proton collisions at a centre-of-mass energy of 7 TeV were performed by the Compact Muon Solenoid (CMS) [4] and ATLAS [5,6] experiments.
Previous measurements are compatible with expectations based on approximate next-to-leading-order plus next-to-next-to-leading-logarithm (NLO+NNLL) perturbative quantum chromodynamics (QCD) calculations. In these calculations, three types of parton scattering processes are considered: t-channel and s-channel processes, and W-associated single-top-quark production (tW). The dominant contribution to the cross section is expected to be from the t-channel process, with a cross section of σ th t-ch. = 64.6 +2.1 −0.7 +1.5 −1.7 pb [7] for a top-quark mass of m t = 172.5 GeV/c².
This paper extends the previous CMS measurement [4] of the t-channel cross section. The single-top-quark production cross section measurement is based on pp collision data at √s = 7 TeV collected during 2011 with the CMS experiment, corresponding to integrated luminosities of 1.17 and 1.56 fb −1 in the muon and electron final states, respectively. Events with leptonically decaying W bosons are selected: t → bW → bℓν (ℓ = e or µ). This measurement is used to determine the CKM matrix element |V tb |.
The t-channel event signature (Fig. 1) typically comprises one forward jet scattered off a top quark, while the decay products of the top quark mainly appear in the central region of the detector. A dedicated event selection is applied, and then measurements with two complementary approaches are performed. The first approach exploits the reconstructed top-quark mass and one of the angular properties specific to t-channel top-quark production: the forward pseudorapidity distribution of the light jet recoiling against the top quark. This analysis is referred to as the |η j | analysis. It is straightforward and robust and has little model dependence. The second approach exploits, via multivariate discriminators, the compatibility of the signal candidates with the event characteristics predicted by the SM for electroweak top-quark production, and aims for a precise t-channel cross section measurement.

All generated events undergo a full simulation of the detector response according to the CMS implementation of GEANT4 [19], and are processed by the same reconstruction software used for collision data.
Event Selection and Reconstruction
Events are characterized by a single isolated muon or electron and momentum imbalance due to the presence of a neutrino, with one central b jet from the top-quark decay. An additional light-quark jet from the hard-scattering process is often present in the forward direction. A second b jet produced in association with the top quark can also be present (Fig. 1, right), although it yields a softer p T spectrum with respect to the b jet coming from the top-quark decay.
The trigger used for the online selection of the analysed data in the muon channel is based on the presence of at least one isolated muon with a transverse momentum p T > 17 GeV/c. For the electron channel, an isolated-electron trigger with a transverse momentum threshold of p T > 27 GeV/c was used for the initial data-taking period, corresponding to an integrated luminosity of 216 pb −1 . For the remaining data-taking period, a trigger selecting at least one electron with p T > 25 GeV/c and a jet with p T > 30 GeV/c was used. The jet is identified in the trigger processing as coming from the fragmentation of a b quark using the Track Counting High-Efficiency (TCHE) b-tagging algorithm described in Ref. [20].
At least one primary vertex is required, reconstructed from a minimum of four tracks, with a longitudinal distance of less than 24 cm and a radial distance of less than 2 cm from the nominal interaction point. To select good muon and electron candidates, the same lepton identification as described in Ref. [21] is applied. Electrons, muons, photons, and charged and neutral hadron candidates are reconstructed and identified using the CMS particle-flow (PF) algorithm [22]. The missing transverse momentum vector p T / is reconstructed from the momentum imbalance of PF particle candidates in the plane transverse to the beam direction. The magnitude of p T / is the missing transverse energy E / T . The presence of exactly one isolated muon or electron candidate originating from the primary vertex is required in the event. Muon candidates are selected by requiring a transverse momentum p T > 20 GeV/c and a pseudorapidity |η| < 2.1. Electron candidates must have a transverse momentum p T > 30 GeV/c and |η| < 2.5. Lepton isolation I rel is defined as the sum of the transverse energy deposited by stable charged hadrons, neutral hadrons, and photons in a cone of ∆R = √((∆η)² + (∆φ)²) = 0.4 around the charged-lepton track, divided by the transverse momentum of the lepton. Muon isolation is ensured by requiring I rel < 0.15, while the isolation requirement is tightened to I rel < 0.125 for electrons. An electron candidate is rejected if it is identified as originating from the conversion of a photon into an electron-positron pair, or if it fails the identification criteria described in Ref. [21]. Events are also rejected if an additional muon candidate is present that passes looser quality criteria, namely p T > 10 GeV/c, |η| < 2.5, and I rel < 0.2. For additional electrons, the required transverse momentum is p T > 15 GeV/c. Jets are defined by clustering PF candidates according to the anti-k T algorithm [23] with a distance parameter of 0.5. The analysis considers jets whose calibrated transverse momentum is greater than 30 GeV/c for |η| < 4.5. An event is accepted for further analysis only if at least two such jets are reconstructed. A jet is identified as coming from b-quark fragmentation if it passes a tight threshold on the Track Counting High-Purity (TCHP) b-tagging algorithm [20], corresponding to a misidentification probability of 0.1%. The difference between simulated and measured b-tagging efficiencies, for true and misidentified b jets, is corrected by scaling the simulated event yields according to p T -dependent scale factors determined from control samples from the data [20].
Events with a muon that is not from a leptonic decay of a W boson are suppressed by requiring a reconstructed transverse W-boson mass m T = √(2 p T E / T (1 − cos ∆φ(µ, p T / ))) > 40 GeV/c², where ∆φ(µ, p T / ) is the azimuthal angle between the muon and the p T / directions and p T is the transverse momentum of the muon. For the electron channel, where the QCD multijet contamination is larger, the requirement E / T > 35 GeV is applied instead of the m T selection.
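As an illustration, a minimal sketch of this selection variable; the function name and example numbers are ours, not taken from the paper:

```python
import numpy as np

def transverse_w_mass(pt_lep, met, dphi):
    """m_T = sqrt(2 * pT(lepton) * MET * (1 - cos(dphi))), with dphi the
    azimuthal angle between the lepton and the missing-momentum direction."""
    return np.sqrt(2.0 * pt_lep * met * (1.0 - np.cos(dphi)))

# A 30 GeV/c muon back-to-back with 40 GeV of missing energy:
print(transverse_w_mass(30.0, 40.0, np.pi))  # ~69.3 GeV/c^2, passes m_T > 40
```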
To classify signal and control samples, different event categories are defined and denoted as "n-jets m-btags", where n is the number of selected jets (2, 3, or 4) and m is the number of selected b-tagged jets (0, 1, or ≥ 2). The single-top-quark t-channel signal is primarily contained in the category "2-jets 1-btag", followed by "3-jets 1-btag", as the second b jet, which is produced in association with the top quark, is mostly out of acceptance. The other categories are dominated by background processes with different compositions. In particular, the "2/3-jets 0-btags" categories are enriched in events with a W boson produced in association with light partons (u, d, s, g). The "3-jets 2-btags" and "4-jets 0/1/2-btags" categories are enriched in tt events.
The reconstruction of the top quark from its decay products leads to multiple choices of possible top-quark candidates. In the first step, the W-boson candidate is reconstructed from the charged lepton and from p T / following the procedure described in Ref. [4]. In the second step, the top-quark candidate is reconstructed by combining the W-boson candidate with a jet identified as coming from a b quark, and its mass m νb is calculated. Depending on the analysis category, the ambiguity in the choice of the b-quark jet from the top-quark decay and the recoiling light quark has to be resolved. Events in the "2-jets 1-btag" category have no ambiguity: the b-tagged jet is associated with the top-quark decay, and the other jet is considered to be a light-quark jet.
In the other categories, the top-quark and light-quark jet reconstruction has been optimized for the purpose of each analysis and differs among them. In the NN and BDT analyses, the most forward jet is chosen to be the light-quark jet in categories where two jets and zero or two b tags are required. The other jet is associated with the top-quark decay. In categories where three or more jets are required, in the case of one or two required b tags, the jet with the lowest value of the TCHP discriminator is assumed to originate from the light quark. If no b tag is required, the most forward jet is associated with the light quark. From the remaining jets, the one which together with the reconstructed W boson has a reconstructed top-quark mass m νb closest to m t = 172.5 GeV/c² is chosen as the b jet coming from the top-quark decay. In the |η j | analysis, the jet with the highest value of the TCHP discriminator is used for the top-quark reconstruction in the "2-jets 0-btags" and "3-jets 2-btags" categories. The inclusive |η| distribution of both jets is used in "2-jets 0-btags", while for the "3-jets 2-btags" category the |η| of the non-b-tagged jet is used.
In the |η j | analysis the invariant mass m νb of the reconstructed top quark is used to further divide the "2-jets 1-btag" category into a t-channel enriched signal region (SR), defined by selecting events within the mass range 130 < m νb < 220 GeV/c², and a W boson and tt enriched sideband region (SB), defined by selecting events that are outside this m νb mass window. The event yield in the SR is summarised in Table 1 for the muon and electron channels, together with expectations from simulated signal and backgrounds, and for QCD multijet events, which are determined from control samples of data, as described in Section 5.1.

Table 1: Event yield with statistical uncertainties of the |η j | analysis for the signal and main background processes in the signal region, after applying the m νb mass requirement for the µ and e channels. The yields are taken from simulation except for the QCD multijet yield, which is obtained from control samples of data as described in Section 5.1. The normalisation of the Wc(c) and Wb(b) processes is further discussed in Section 5.2.
Background Estimation and Control Samples
Several control data samples are used for two main purposes:
• to check the distributions of variables used as inputs to the analyses and the agreement between data and simulation;
• to determine from data the yields and distributions of variables of interest for the main background processes.
All the analyses use rate and shape determinations from QCD multijet background data. The |η j | analysis also determines the yields and distributions of the background processes by using W boson production in association with jets from light quarks as well as c and b quarks. The NN and BDT analyses take the shape of the W+jets background from MC simulation, but consider the impact of shape uncertainties in the evaluation of the systematic uncertainties. In all three analyses the rate of the W+jets background is determined "in situ" by the signal extraction method as described in Section 9.
QCD Multijet Background Estimation
The yield of the QCD multijet background in the different categories is measured by performing fits to the distributions of the transverse W-boson mass m T in the muon channel and of E / T in the electron channel. A maximum-likelihood fit to the distribution of m T or E / T is performed assuming the parameterisation F(x) = a · S(x) + b · B(x), where x is m T or E / T for the muon and electron channels, respectively, and S(x) and B(x) are the expected distributions for the sum of all processes including a W boson in the final state and for QCD multijet events, respectively. The function S(x) is taken from simulation, while B(x) is extracted directly from data. The parameters a and b are determined from the fit. The QCD multijet background yield is estimated as the area of the fitted curve b · B(x) in the range m T > 40 GeV/c² for the muon channel and E / T > 35 GeV for the electron channel, as mentioned in Section 4.
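A minimal sketch of such a two-template fit; the binned-likelihood form and all names are our assumptions, as the paper does not specify the implementation:

```python
import numpy as np
from scipy.optimize import minimize

def fit_qcd_yield(n_obs, S, B, threshold_bin):
    """Binned maximum-likelihood fit of n_i ~ Poisson(a*S_i + b*B_i),
    where S is the W-like template from simulation and B the QCD template
    extracted from data (both binned in m_T or MET)."""
    def nll(p):
        a, b = p
        mu = np.clip(a * S + b * B, 1e-9, None)
        return np.sum(mu - n_obs * np.log(mu))  # Poisson NLL up to a constant
    a, b = minimize(nll, x0=[1.0, 1.0], bounds=[(0, None)] * 2).x
    return b * B[threshold_bin:].sum()  # QCD yield above the selection threshold
```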
QCD multijet enriched samples from data are used to model the distributions B(x) of m T and E / T . For the muon channel, this sample is selected by inverting the isolation requirement. For the electron channel, the selected electron is required to fail at least two of the three following quality requirements: I rel < 0.1, a distance of closest approach to the primary vertex in the x-y plane of δ xy < 0.02 cm, and the electron identification criteria given in Ref. [21]. It was verified by simulation that the m T and E / T distributions for QCD-multijet-like events are not significantly affected by this altered event selection.
In the |η j | analysis the fits to the m T and E / T distributions cannot be performed reliably in the SR and SB separately due to the limited size of the simulated samples, which would introduce large uncertainties in the signal modelling. For this reason, the fit is performed on the entire "2-jets 1-btag" sample. The number of QCD multijet events in the SB and SR regions is determined by scaling the total QCD multijet yield, obtained from the fit, by the fraction of events in the two regions (SB and SR) of the m νb distribution, as determined from the QCD multijet enriched region. In the simulation, the distributions of the relevant variables obtained from the QCD multijet enriched sample are consistent with the ones in the SR. The QCD multijet yields restricted to the SR are reported in Table 1.
For all three analyses, the relative uncertainties on the QCD multijet yield estimates are taken to be ±50% for the muon channel and ±100% for the electron channel. Several cross checks have been performed; for example, the same fits have been repeated taking B(x) from simulation for both channels. Moreover, the choice of x = m T or E / T has been inverted for the muon and electron channels, in this way performing fits on m T for the electron channel and E / T for the muon channel. The results in each case are in agreement with the previous estimate within the assumed uncertainties.
W+jets Background Estimation and Other Control Samples
A check of the modelling of the W+jets background is carried out for each of the three analyses in the 0-btags control regions. In particular, the "2-jets 0-btags" category is highly enriched in W+light-jet events. The modelling of the tt background is checked in the "3-jets 2-btags" as well as the "4-jets" categories. In general, the event yields are reasonably well reproduced by the simulation within the systematic uncertainties. The shapes of the relevant variables, |η j | and m νb , and the input variables of the NN and BDT analyses show good agreement between data and simulation. Table 1 shows a difference between the total observed and expected yields for the |η j | analysis. This difference can be attributed to excesses in data for the Wb+X and Wc+X processes. The ATLAS collaboration reported in Ref. [24] that the fiducial W+b-jet cross section in the lepton plus one or two jets final state is a factor of 2.1 larger than the NLO prediction, but still consistent with this SM prediction at the level of 1.5 standard deviations.
Motivated by these observed excesses in comparison to the SM NLO calculations, the |η j | analysis determines the W+jets background yield and |η j | distribution from data. The |η j | distribution for the W+jets processes is extracted from the SB by subtracting the |η j | distribution of all other processes from the data. The event yields and |η j | distributions used for these subtractions are taken from simulations of tt, single-top-quark s- and tW-channels, and diboson production. The QCD multijet event yield and |η j | distribution are extracted from data and extrapolated to the SB as described in Section 5.1. The |η j | distribution for W+jets processes in the SB is then used in the SR for the signal extraction procedure (see Section 6), assuming that the shapes in the SB and SR are compatible with each other. For the muon channel, the compatibility of the distributions in the two regions has been verified through a Kolmogorov-Smirnov compatibility test, yielding a p-value of 0.47, and a χ² test, yielding a p-value of 0.63. For the electron channel, the Kolmogorov-Smirnov compatibility test has a p-value of 0.51 and the χ² test a p-value of 0.60. The stability of the extracted shape has been tested by varying the sample composition in terms of tt and signal fractions by 20% and 100%, respectively. The extracted shapes are compatible, with a p-value greater than 0.9 in both cases.
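The quoted compatibility tests can be reproduced schematically as follows; variable names and the binning are our assumptions:

```python
import numpy as np
from scipy.stats import ks_2samp, chi2

def shape_compatibility(eta_sb, eta_sr, bins):
    """KS test on the unbinned |eta_j| values and a chi^2 test on the
    corresponding same-normalisation histograms with Poisson bin variances."""
    ks_stat, ks_pval = ks_2samp(eta_sb, eta_sr)
    h_sb, _ = np.histogram(eta_sb, bins=bins)
    h_sr, _ = np.histogram(eta_sr, bins=bins)
    h_sb = h_sb * (h_sr.sum() / h_sb.sum())     # scale SB to the SR yield
    var = h_sb + h_sr
    mask = var > 0
    stat = np.sum((h_sb[mask] - h_sr[mask]) ** 2 / var[mask])
    return ks_pval, chi2.sf(stat, df=int(mask.sum()))
```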
The |η j | Analysis
The signal yield is extracted using a maximum-likelihood fit to the observed distribution of |η j |. The signal distribution for the fit is taken from simulation. The W/Z+jets component of the background is normalised to the value obtained from the extraction procedure described in Section 5.2, and then added to the diboson processes, resulting in the electroweak component of the background for the fit. The signal and the electroweak components are unconstrained in the fit, whereas the QCD multijet component is fixed to the result determined in Section 5.1. A Gaussian constraint is applied to the tt and other top-quark backgrounds. Figure 2 shows the distribution of |η j | obtained from the fit. Figure 3 shows the distribution of the reconstructed top-quark mass m νb , normalised to the fit results and restricted to the region highly enriched in single-top-quark events, |η j | > 2.8.
Single-top-quark t-channel production at the LHC is expected to be characterised by two features. First, the top-quark cross section is about a factor of two larger than the top-antiquark cross section [7]. This can be experimentally accessed via the charge of the muon or electron. Second, top quarks are almost 100% polarised with respect to a certain spin axis due to the V−A nature of the couplings. This can be studied via the cos θ* distribution [25], where θ* is defined as the angle between the charged lepton and the non-b-tagged jet in the reconstructed top-quark rest frame. The observed charge asymmetry and the cos θ* distribution are presented in Fig. 4 for muon plus electron events in the SR, for |η j | > 2.8.

Figure 4: Distinct single-top-quark t-channel features in the SR for |η j | > 2.8, for the electron and muon final states combined: the charge of the lepton (left) and cos θ* (right). All processes are normalised to the fit results. Because of limited simulated data, the background distribution (right) is smoothed by using a simple spline curve.
Neural Network Analysis
In the NN analysis, several kinematic variables, which are characteristic of SM single-top-quark production, are combined into a single discriminant by applying an NN technique. The NEUROBAYES package [26,27] used for this NN analysis combines a three-layer feed-forward NN with a complex, but robust, preprocessing. To reduce the influence of long tails in distributions, input variables are transformed to be Gaussian distributed. In addition, a diagonalisation and rotation transformation is performed such that the covariance matrix of the transformed variables becomes a unit matrix. To obtain good performance and to avoid overtraining, the NN uses Bayesian regularisation techniques for the training process. The network input layer consists of one input node for each input variable plus one bias node. The hidden layer is adapted to this particular analysis and consists of one more node than the input layer. The output node gives a continuous discriminator output in the interval [−1, 1]. For the training of the NN, after applying the full event selection as described in Section 4, simulated samples of signal t-channel single-top-quark events and background events from tt, W+jets, and Z+jets samples are used. The ratio of signal to background events in the training is chosen to be 50:50, and the background processes are weighted according to the SM prediction, as outlined in Section 5. The NN is trained such that t-channel single-top-quark events tend to have discriminator values close to 1, while background events tend to have discriminator values near −1. Because of their different event selections, separate neural networks are trained for muon and electron events.
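A rough sketch of the two preprocessing steps (rank-based Gaussianisation followed by decorrelation); the actual NEUROBAYES implementation is more elaborate, so this is only an illustration:

```python
import numpy as np
from scipy.stats import norm

def gaussianise(x):
    """Map a variable to a standard normal via its empirical CDF,
    flattening the long tails mentioned above."""
    ranks = np.argsort(np.argsort(x))
    return norm.ppf((ranks + 0.5) / len(x))

def decorrelate(X):
    """Rotate and rescale so the covariance of the columns becomes the
    unit matrix (diagonalisation plus rotation)."""
    Xc = X - X.mean(axis=0)
    evals, evecs = np.linalg.eigh(np.cov(Xc, rowvar=False))
    return (Xc @ evecs) / np.sqrt(evals)
```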
During the preprocessing, the training variables are ranked by the significance of their correlation to the target discriminator output. The correlation matrix of all training variables and the target value is calculated. The variable with the smallest correlation to the target is removed and the loss of correlation is calculated. This is repeated until the correlation of all variables with the target is determined. The significance of a variable is calculated by dividing the loss of correlation with the target by the square root of the sample size. In order to select variables which contain information that is not already incorporated by other variables, a selection criterion of ≥ 3 σ on the significance has been chosen. A set of 37 variables remains for the muon channel when applying this selection criterion. For the electron channel, a set of 38 variables remains. The validity of the description of these input variables and the output of the NN discriminant is confirmed in data with negligible signal contribution. Furthermore, it is verified with a bootstrapping technique [28] that the bias of the cross section measurement due to a possible overtraining of the NN is negligible. The variables with the highest ranking in both networks are |η j |, m T , the invariant mass of the two leading jets, and the total transverse energy of the event.
The distributions of the NN discriminator in the "2-jets 1-btag" and "3-jets 1-btag" signal categories are shown in Fig. 5 for the muon channel and in Fig. 6 for the electron channel. For the remaining categories "2-jets 2-btags", "3-jets 2-btags", "4-jets 1-btag", and "4-jets 2-btags", the events are combined into one plot in Fig. 7, for the muon channel on the left and for the electron channel on the right.

Figure 7: Distributions of the NN discriminator output in the background-dominated region. All events from the signal-depleted categories "2-jets 2-btags", "3-jets 2-btags", "4-jets 1-btag", and "4-jets 2-btags" are combined for the muon channel (left) and the electron channel (right). Simulated signal and background contributions are scaled to the best-fit results.
Boosted Decision Trees Method
The BDT method was previously used in the first CMS measurement of the t-channel cross section [4]. The current analysis uses a significantly larger data sample. It has been optimised using a "blind" procedure, with the optimisation based exclusively on regions where the signal contribution is negligible and all selections frozen before the signal region was looked at. It further provides an increase of the measurement sensitivity and a reduction of systematic uncertainties. The BDT analysis was designed following Ref. [29]. The adopted BDT algorithm constructs 400 decision trees using the Adaptive Boosting algorithm as implemented in Ref. [30]. The BDT training is carried out separately for the electron and muon final states, for each of the single-top-quark "2-jets 1-btag" and "3-jets 1-btag" signal-enriched regions, to provide a total of four BDTs. The background processes are input to the training according to their theoretical cross section predictions, as outlined in Section 5. One third of the simulated signal and background samples is used for the training, one third is used to verify the performance of the trained BDT, while the remaining simulated data provide an unbiased sample used for the cross section evaluation.
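A schematic of this training setup using scikit-learn's AdaBoost (version 1.2 or later) in place of the implementation of Ref. [30]; the toy data, tree depth, and other hyper-parameters are our choices, not the paper's:

```python
import numpy as np
from sklearn.ensemble import AdaBoostClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(3000, 11))    # toy stand-in for the 11 observables
y = rng.integers(0, 2, size=3000)  # toy signal(1)/background(0) labels
w = np.ones(3000)                  # per-event weights (cross sections)

X_tr, X_rest, y_tr, y_rest, w_tr, w_rest = train_test_split(
    X, y, w, train_size=1 / 3, stratify=y, random_state=0)
X_val, X_eval, y_val, y_eval, w_val, w_eval = train_test_split(
    X_rest, y_rest, w_rest, train_size=0.5, stratify=y_rest, random_state=1)

bdt = AdaBoostClassifier(
    estimator=DecisionTreeClassifier(max_depth=3),  # depth is an assumption
    n_estimators=400)                               # 400 trees, as in the text
bdt.fit(X_tr, y_tr, sample_weight=w_tr)
scores = bdt.decision_function(X_eval)  # output used for the cross section fit
```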
Eleven observables reconstructed in the detector are chosen based on their power to discriminate between signal and background events. The adopted variables are: the lepton transverse momentum; the pseudorapidities of the most forward non-b-tagged jet and of the jet with the highest transverse momentum; the invariant mass of all reconstructed jets in the event; the angular separation between the two leading jets; the sums of the hadronic energy and of the hadronic transverse energy; the reconstructed top-quark mass using the jet with the highest b-tag discriminator; the reconstructed top-quark mass using the jet giving the mass closest to 172 GeV/c²; the cosine of the angle between the reconstructed W boson and the summed four-vector of the W boson and the leading jet, evaluated in the rest frame of that summed four-vector; and the sphericity of the event. The validity of the description of these input variables and of the output of the BDT classifiers is confirmed in data with negligible signal contribution using a Kolmogorov-Smirnov test.
The QCD multijet background evaluation is described in Section 5. The determination of the single-top-quark production cross section in the t-channel, including the treatment of statistical and systematic uncertainties, is performed by using the classifier distributions in all 12 analysis categories simultaneously.
Determination of the Cross Section with Multivariate Analyses
The NN and BDT analyses employ a Bayesian approach [31] to measure the single-top-quark production cross section. The signal cross section is determined simultaneously from the data distributions of the corresponding multivariate discriminator, modelled as 1-dimensional histograms, in the different categories. The distributions for signal and backgrounds are taken from simulation, except for the QCD multijet distribution, which is derived from data. The signal yield is measured in terms of the signal strength µ, which is defined as the actual cross section divided by the SM prediction. The probability to observe a certain dataset, p(data|µ), is related to the posterior distribution p(µ|data) of µ by using Bayes' theorem, p(µ|data) ∝ p(data|µ) · π(µ), where π(µ) denotes a uniform prior distribution of the signal strength.
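Ignoring nuisance parameters for the moment, the statistical model is a product of Poisson terms over the discriminator bins; a minimal sketch of the resulting posterior, with our notation:

```python
import numpy as np
from scipy.stats import poisson

def posterior(n_obs, s, b, mu_grid):
    """p(mu|data) proportional to prod_i Poisson(n_i | mu*s_i + b_i) with a
    flat prior, where s and b are the binned signal and background templates."""
    logp = np.array([poisson.logpmf(n_obs, m * s + b).sum() for m in mu_grid])
    p = np.exp(logp - logp.max())
    return p / np.trapz(p, mu_grid)  # normalised posterior density on the grid
```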
Experimental uncertainties and cross-section uncertainties are included by introducing additional parameters θ in the statistical model, extending the probability to observe a certain dataset to p (data|µ, θ), which depends on the nuisance parameters θ. This approach allows the nuisance parameter values to be constrained by data, thus reducing the respective uncertainties.
The m systematic uncertainties are included via nuisance parameters θ = (θ 1 , · · · , θ m ), which comprise k normalisation parameters and m − k parameters influencing the shape of the simulated distributions. For the normalisation uncertainties i = 1, · · · , k, the priors π i (θ i ) are log-normal distributions. The medians are set to the corresponding cross sections and the widths to their uncertainties. The priors for shape uncertainties are normal distributions. For each shape uncertainty, two shifted simulated discriminator distributions are derived, varying the corresponding uncertainty by ±1σ. Here, the parameter θ i is used to interpolate between the nominal and shifted histograms.
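One common interpolation consistent with this description is piecewise-linear vertical morphing; the exact functional form used by the analysis is not stated, so this is an assumption:

```python
import numpy as np

def morph(nominal, up, down, theta):
    """Interpolate between the nominal histogram and the +/-1 sigma shifted
    ones, steered by the nuisance parameter theta (theta = +1 returns `up`,
    theta = -1 returns `down`)."""
    delta = (up - nominal) if theta >= 0 else (nominal - down)
    return nominal + theta * delta
```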
The posterior distribution for the signal strength µ, p(µ|data), is obtained by integrating the (m + 1)-dimensional posterior over all nuisance parameters θ: p(µ|data) = ∫ p(µ, θ|data) dθ 1 · · · dθ m . This integration, also called marginalisation, is performed using the Markov chain MC method as implemented in the package THETA [32]. The median of this distribution is taken as the central value of µ, and the central 68% quantile is taken as the total marginalised uncertainty in the measurement. It includes the statistical uncertainty and the m systematic uncertainties.
An estimate of the statistical uncertainty is taken from the total marginalised uncertainty after subtracting, in quadrature, the individual contributions of marginalised uncertainties. To obtain the individual contribution of each systematic uncertainty, the signal extraction procedure is repeated with signal and background contributions changed according to the systematic uncertainty. The mean shift of the cross section estimate, with respect to the value obtained in the nominal scenario, is taken as the corresponding impact on the signal cross section measurement.
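In terms of Markov-chain samples of µ, the quantities just described amount to the following sketch (names are ours):

```python
import numpy as np

def summarise(mu_samples):
    """Central value (median) and central 68% credible interval."""
    lo, med, hi = np.percentile(mu_samples, [16, 50, 84])
    return med, med - lo, hi - med

def stat_component(total_unc, syst_contributions):
    """Statistical part of the uncertainty: subtract the individual
    marginalised systematic contributions in quadrature."""
    return np.sqrt(total_unc ** 2 - np.sum(np.square(syst_contributions)))
```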
However, modelling the systematic uncertainties as nuisance parameters requires an assumption for the dependence of p on θ i . This dependence is often unknown for theoretical uncertainties, in particular if they have an effect on the shape of the discriminator distribution. Therefore, such uncertainties are not included via additional nuisance parameters. Instead, their effect on the cross section measurement is estimated by performing pseudo-experiments as explained above, and their impact is added in quadrature to the total marginalised uncertainty.
In conclusion, systematic uncertainties are included with two different methods. Experimental uncertainties and cross-section uncertainties are included as additional parameters in the statistical model and marginalised. Theoretical uncertainties are not included as additional parameters in the statistical model, but their impact is added in quadrature to the total marginalised uncertainty.
Systematic Uncertainties and Measurement Sensitivity
For the |η j | analysis, each systematic uncertainty is evaluated by generating pseudo-experiments, which take into account the effect of the corresponding systematic source on the distribution of |η j | and on the event yield of the physics processes. Pseudo-experiments are generated separately with templates varied by ±1 σ of the corresponding uncertainty. A fit to |η j | is then performed on each pseudo-experiment. The mean shift of the fit results, with respect to the value obtained in the nominal scenario, is taken as the corresponding uncertainty.
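Schematically, the evaluation for a single systematic source looks as follows, where fit stands in for the full |η j | fit and the shifted template encodes the ±1 σ variation (a sketch with our names):

```python
import numpy as np

rng = np.random.default_rng(42)

def systematic_shift(template_shifted, fit, nominal_value, n_pseudo=1000):
    """Draw pseudo-experiments from the shifted template, refit each one,
    and take the mean shift from the nominal result as the uncertainty."""
    results = [fit(rng.poisson(template_shifted)) for _ in range(n_pseudo)]
    return np.mean(results) - nominal_value
```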
In the BDT and NN analyses the experimental systematic uncertainties (excluding the luminosity) are marginalised with the Bayesian method. Theoretical uncertainties, however, are not marginalised, but are estimated by generating pseudo-experiments using separate templates, varied by ±1 σ for the corresponding uncertainty, for each source of systematic uncertainties, and repeating the signal extraction procedure.
A particular uncertainty, which is only present for the |η j | analysis, concerns the extraction of the W+jets background from data. It is evaluated by generating pseudo-experiments in the SB and repeating the signal extraction procedure and the fit to |η j |. This method exploits the ansatz that the distribution of |η j | is the same in both the SR and the SB. The uncertainty is taken as the root mean square of the distribution of fit results obtained in this way. This uncertainty depends on the amount of available data in the SB and is uncorrelated between the muon and electron samples. In addition, alternative |η j | shapes are derived in the simulation by varying the Wb+X and Wc+X fractions of the background by factors of ±30%, independently in the SR and SB regions. The fit procedure is repeated using the new shapes, and the maximum difference in the result with respect to the central value is added in quadrature to the other uncertainties.
The following sources of systematic uncertainties are considered in all three analyses. Differences between the |η j | and the two multivariate analyses are remarked upon where relevant:
• Jet energy scale (JES). All reconstructed jet four-momenta in simulated events are simultaneously varied according to the η- and p T -dependent uncertainties on the jet energy scale [33]. This variation in jet four-momenta is also propagated to E / T . For the multivariate analyses, the complete parameterisation of the JES uncertainty of Ref. [33] is considered, and all its parameters are included as nuisance parameters in the marginalisation procedure described in Section 9.
• Jet energy resolution. A smearing is applied to account for the known difference in jet energy resolution with respect to data [34], increasing or decreasing the extra resolution contribution by the uncertainty on the resolution.
• b tagging. Both b tagging and misidentification efficiencies in the data are estimated from control samples [20]. Scale factors are applied to simulated samples to reproduce the measured efficiencies. The corresponding uncertainties are propagated as systematic uncertainties. For multivariate analyses, b tagging average scale factors are constrained using the marginalisation procedure described in Section 9. The effect of any remaining, unconstrained, b tagging modelling is determined by modelling possible variations of the b tagging scale factors as a function of p T and |η|, using different degrees (from one to five) of Chebyshev polynomials. The largest observed variation with respect to the nominal result is taken as the systematic uncertainty, and is found to be negligible.
• Trigger. Single-lepton trigger efficiencies are estimated with a "tag and probe" method [35] from Drell-Yan data. The efficiencies of triggers requiring a lepton plus a b-tagged jet are parameterised as a function of the jet p T and the value of the TCHP b-tag discriminator. The selection efficiencies have been validated using a reference trigger. The uncertainties of the parameterisation and an additional flavour dependence are propagated to the final result.
• Pileup. The effect of multiple interactions (pileup) is evaluated by reweighting simulated samples to reproduce the expected number of pileup interactions in data, properly taking into account in-time and out-of-time pileup contributions. The uncertainty on the expected number of pileup interactions (5%) is propagated as a systematic uncertainty to this measurement.
• Missing transverse energy. The E / T modelling uncertainty is propagated to the cross section measurement. The effect on E / T measurement of unclustered energy deposits in the calorimeters is included.
• Background normalisation. The uncertainties on the normalisation of each background source are listed below. They are propagated as systematic uncertainties in the |η j | analysis only for the diboson and s- and tW-channel single-top-quark processes. The remaining backgrounds are estimated from data. The uncertainty on tt is used as a Gaussian constraint in the signal-extraction fit. In the multivariate analyses, normalisation uncertainties are accounted for as a prior probability density function for the Bayesian inference using a log-normal model.
• Dibosons, single-top-quark s- and tW-channels: ±30%, ±15%, and ±13%, respectively, based on theoretical uncertainties.
• W/Z+jets: ±100%, ±50%, and ±30% are taken for W+b/c-flavour jets, W+light-flavour jets, and Z+jets, respectively, consistent with previous estimates [37]. In the multivariate analyses, the various W+jets processes are considered to be uncorrelated, as are the different jet categories, in order to avoid too many model assumptions.
• QCD multijet: the normalisation and the corresponding uncertainty are determined from data (see Section 5).
• Limited MC data. The uncertainty due to the limited amount of MC data in the templates used for the statistical inferences is determined by using the Barlow-Beeston method [38,39].
• Scale uncertainty. The uncertainties on the renormalisation and factorisation scales are studied with dedicated single-top-quark and background samples of W+jets, Z+jets, and tt events. They are generated by doubling or halving the renormalisation and factorisation scales with respect to the nominal value, equal to the Q² of the hard-scattering process.
• Extra parton modelling (matching). The uncertainty due to extra hard parton radiation is studied by doubling or halving the threshold for the MLM jet matching scheme [40] for W+jets, Z+jets, and tt from its default.
• Signal generator. The results obtained by using the nominal POWHEG signal samples are compared with the results obtained using signal samples generated with COMPHEP. In general, the largest model deviations occur in the kinematic distributions of the spectator b quark [41]. The differences in the transverse momentum distribution of the spectator b quark, at the generator level, between the 4-flavour and 5-flavour scheme (FS) POWHEG samples [42] are more than a factor of two smaller than the differences between POWHEG-5FS and COMPHEP over the whole p T range. Pseudo-experiments with simulated COMPHEP events are generated and the nominal signal extraction procedure with POWHEG-5FS templates is repeated. Half of the observed shift of the cross section measurement is taken as the systematic uncertainty due to the signal-generator choice.
• Parton distribution functions. The uncertainty due to the choice of the parton distribution functions (PDF) is estimated using pseudo-experiments, reweighting the simulated events with each of the 40 eigenvectors of the CTEQ6 [18] PDF set and the central set of CTEQ10 [43], and repeating the nominal signal extraction procedure (a sketch of the corresponding master formula is given after this list). For reweighting the simulated events, the LHAPDF [44] package is used.
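As noted above, the 40 CTEQ6 eigenvector members come in 20 (+, −) pairs; a sketch of the symmetric Hessian master formula often used with such sets (whether the analysis used exactly this symmetric variant is our assumption):

```python
import numpy as np

def hessian_pdf_uncertainty(x_pairs):
    """x_pairs has shape (20, 2): the extracted cross sections for the (+, -)
    variations of each of the 20 CTEQ6 eigenvector pairs."""
    plus, minus = x_pairs[:, 0], x_pairs[:, 1]
    return 0.5 * np.sqrt(np.sum((plus - minus) ** 2))
```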
The jet energy scale and jet energy resolution are fully correlated across all samples. The matching and scale uncertainties are fully correlated between W+jets and Z+jets, but are uncorrelated with tt. Table 2 summarises the different contributions to the systematic uncertainty on the combined (muon and electron) cross section measurement in the three analyses. For the multivariate analyses, experimental uncertainties and background rates are constrained from the marginalisation procedure (except for the luminosity), as described in Section 9. The remaining theoretical and luminosity uncertainties are added separately in quadrature to the total uncertainty.
The two measurements are compatible, taking into account correlated and uncorrelated uncertainties. The uncorrelated uncertainties include the W+jets and QCD extraction procedure, lepton reconstruction and trigger efficiencies, and the hadronic part of the trigger.
The Bayesian inference is performed with the data samples and the 50% quantile is calculated as the best parameter estimate for the signal strength µ. The 84% and 16% quantiles are quoted as upper and lower boundaries for the 1σ credible interval.
The measured single-top-quark t-channel production cross section in the BDT analysis is σ t-ch. = 66.6 +7.0 −6.6 (stat. + syst. + lum.) +6.4 pb. The results of the three analyses are consistent with the SM prediction.
Combination
The results of the three analyses are combined using the BLUE method. The statistical correlation between each pair of measurements is estimated by generating dedicated pseudo-experiments. The correlation is 60% between NN and |η j |, 69% between BDT and |η j |, and 74% between NN and BDT. Correlations for the jet energy scale and resolution, b tagging, and E / T modelling between |η j | and the two multivariate analyses are expected to be small. This is because the determination of the corresponding nuisance parameters, from the marginalisation adopted in the BDT and NN analyses, is dominated by in-situ constraints from data samples independent of those used to determine the uncertainties in the |η j | analysis. The assumed correlation for those uncertainties is taken to be 20%. The correlation has, nevertheless, been varied from 0% to 50%, with a corresponding variation of the central value by −0.03 pb, and no appreciable variation has been observed for the combined uncertainty. For trigger uncertainties, the correlation between |η j | and the two multivariate analyses is more difficult to ascertain. Varying the correlations in the combination from 0% to 100% results in a variation of the central value of 0.03 pb, with no appreciable variation of the combined uncertainty. All other uncertainties are determined mostly from the same data samples used by the two analyses, hence 100% correlation is assumed.
The BLUE method is applied iteratively, as previously carried out in Ref. [4]. In each iteration, the absolute uncertainty is calculated by scaling the relative uncertainties given in Table 2 with the combined value from the previous iteration. This is repeated until the combined value remains constant. There are no appreciable changes with respect to the non-iterative BLUE method. The 0.03 pb variation in the central value, due to changes in correlation coefficients, is added in quadrature to the total uncertainty; this results in a negligible additional contribution.
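The iterative rescaling just described amounts to the following sketch, where corr is the full correlation matrix assembled from the values quoted above (names are ours):

```python
import numpy as np

def blue(values, cov):
    """BLUE combination: weights w = C^{-1} 1 / (1^T C^{-1} 1)."""
    cinv = np.linalg.inv(cov)
    w = cinv.sum(axis=1) / cinv.sum()
    return w @ values, np.sqrt(1.0 / cinv.sum())

def iterative_blue(values, rel_unc, corr, n_iter=20):
    """Rescale the relative uncertainties to the latest combined value,
    rebuild the covariance and recombine until the estimate is stable."""
    x = np.mean(values)
    for _ in range(n_iter):
        sig = rel_unc * x                       # absolute uncertainties
        x, err = blue(values, corr * np.outer(sig, sig))
    return x, err
```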
The χ 2 obtained by the BLUE combination of the three analyses is 0.19, corresponding to a p-value of 0.90. The results of the individual analyses are consistent with each other.
|V tb | Extraction
The absolute value of the CKM element |V tb | is determined in a similar fashion to Ref. [4], via |f L V V tb | = √(σ t-ch. /σ th t-ch. ), where σ th t-ch. is the SM prediction calculated assuming |V tb | = 1 [7]. If we take into account the possible presence of an anomalous Wtb coupling, this relation is modified by an anomalous form factor f L V [45-47], which is not necessarily equal to 1 in beyond-the-SM models. We determine |f L V V tb | = 1.020 ± 0.046 (meas.) ± 0.017 (theor.), where the first uncertainty term contains all uncertainties of the cross section measurement, including theoretical ones, and the second is the uncertainty on the SM theoretical prediction. From this result, the confidence interval for |V tb |, assuming the constraint |V tb | ≤ 1 and f L V = 1, is determined using the unified approach of Feldman and Cousins [48] to be 0.92 < |V tb | ≤ 1, at the 95% confidence level.
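The propagation of the measurement uncertainty through the square root can be checked directly; the theoretical term involves asymmetric scale and PDF components and is not reproduced by this simple sketch:

```python
import numpy as np

sigma, d_sigma = 67.2, 6.1   # measured t-channel cross section and uncertainty [pb]
sigma_th = 64.6              # approximate NNLO SM prediction [pb]

f_vtb = np.sqrt(sigma / sigma_th)        # = 1.020
d_meas = f_vtb * 0.5 * d_sigma / sigma   # = 0.046, matching the quoted value
print(f"|fLV Vtb| = {f_vtb:.3f} +/- {d_meas:.3f} (meas.)")
```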
Conclusions
The cross section of t-channel single-top-quark production has been measured in pp collisions using 2011 data in semileptonic top-quark decay modes with improved precision compared to the previous CMS measurement. Two approaches have been adopted. One approach has been based on a fit of the characteristic pseudorapidity distribution of the light quark recoiling against the single top quark in the t-channel with background determination from data. The other has been based on two multivariate discriminators, a Neural Network and Boosted Decision Trees. The multivariate analyses reduce the impact of systematic uncertainties by simultaneously analysing phase space regions with substantial t-channel single-top-quark contributions, and regions where they are negligible. The results are all consistent within uncertainties.
As a consequence, all three analyses have been combined with the Best Linear Unbiased Estimator method to obtain the final result.
The combined measurement of the single-top-quark t-channel cross section is 67.2 ± 6.1 pb. This is the first measurement with a relative uncertainty below 10%. It is in agreement with the approximate NNLO standard model prediction of 64.6 +2.1 −0.7 +1.5 −1.7 pb [7]. Figure 11 compares this measurement with dedicated t-channel cross section measurements at the Tevatron [49,50] and ATLAS [5], and with the QCD expectations computed at NLO with MCFM in the 5-flavour scheme [51] and at approximate NNLO [7]. The absolute value of the CKM matrix element V tb is measured to be |f L V V tb | = √(σ t-ch. /σ th t-ch. ) = 1.020 ± 0.046 (meas.) ± 0.017 (theor.). Assuming f L V = 1 and |V tb | ≤ 1, we measure the 95% confidence level interval 0.92 < |V tb | ≤ 1.

Figure 11: The single-top-quark cross section in the t-channel vs. centre-of-mass energy. The error band (width of the curve) of the SM calculation is obtained by varying the top-quark mass within its current uncertainty [3], estimating the PDF uncertainty according to HEPDATA recommendations [52], and varying the factorisation and renormalisation scales coherently by a factor of two up and down. The central values of the two SM predictions differ by 2.2% at √s = 7 TeV.
Acknowledgements
We congratulate our colleagues in the CERN accelerator departments for the excellent performance of the LHC machine. We thank the technical and administrative staff at CERN and other CMS institutes, and acknowledge support from BMWF and FWF (Austria), among others.

[2] D0 Collaboration, "Measurements of single top quark production cross sections and |V tb | in pp̄ collisions at √s = 1.96 TeV", Phys. Rev. D 84 (2011) 112001, doi:10.1103/PhysRevD.84.112001, arXiv:1108.3091.
(2687) Proposal to conserve the name Phyllopsora against Triclinum and Crocynia (Ramalinaceae, lichenized Ascomycota)
Tyrrhenian waters (Montresor & al. in Graneli & al., Toxic Mar. Phytoplankt. 1990). The nomenclatural consequences of our study are substantial, as Alexandrium currently is a later taxonomic synonym of Blepharocysta, which has priority. Following the guidelines specified by McNeill & al. (in Taxon 64: 163–166. 2015; cf. clause (1) under "Conservation and rejection procedures") and applying ICN Art. 14.1–14.4, we propose here to conserve the name Alexandrium against Blepharocysta. Acceptance of our proposal will assure nomenclatural stability in Alexandrium (though it requires a nomenclatural transfer of P. splendor-maris to Alexandrium). This has particular importance as many species of Alexandrium are toxic, and the generic name is used not only in the biological scientific community but also by chemists, medical scientists such as toxicologists, veterinarians, administrators, and policy makers (Hallegraeff & al., l.c.). The rejection of Blepharocysta appears acceptable, as the name is rarely used in its original sense (Ehrenberg, l.c. 1873) but rather in the incorrect interpretation of Stein (l.c.). Unless the alternative proposal by Carbonell-Moore (l.c.) were to be accepted (see below), rejection of our proposal would force all species names today accepted under the well-established name Alexandrium (approximately 33 species, many of which have been intensely studied) to be transferred to Blepharocysta (currently with the only acceptable element P. splendor-maris). This would cause severe nomenclatural instability, and such new combinations would most likely not be accepted by the scientific community. Our proposal causes disadvantage only with regard to the deviant concept of Blepharocysta described by Carbonell-Moore (l.c.), who aims at preserving the misapplied usage of B. splendor-maris in the interpretation of Stein (l.c.) under ICN Art. 14.9 with a conserved type, namely pl. VII 17. That strategy would be justified in the absence of original material assignable to P. splendor-maris, but in this case Ehrenberg's specimens and drawings clearly date prior to the publication of the name (Elbrächter & al., l.c.). Overall, the proposal by Carbonell-Moore (l.c.) aims at an easy but ambiguous solution to preserve current misapplications of Blepharocysta (including 12 names, 9 of them species including synonyms, all of them scarcely observed). However, accepting this solution would neglect Ehrenberg's careful documentation of the species. Furthermore, Stein's misidentification cannot be brought in line with Ehrenberg's protologue data, including the species description (see Elbrächter & al., l.c.). According to our studies, Stein's concepts of Blepharocysta and B. splendor-maris, currently consisting only of a misapplied name and some drawings, do not need a conserved type but rather new formal descriptions, legitimate and validly published names, and a contemporary physical type, independent of Ehrenberg's (l.c. 1860) observations. Later names, formally linked to Ehrenberg's concept and characterised by original material but based on the misapplication of Blepharocysta, would remain available to serve as basionyms for appropriate combinations (ICN Art. 56.1, Note 1). For the authors of this proposal, rejection of Blepharocysta in favour of Alexandrium is a higher good than preserving misapplications of Blepharocysta by means of a conserved type.
Accordingly, we here propose to conserve the name Phyllopsora against Triclinum and Crocynia. Because Symplocia is already rejected in favour of Crocynia, conservation of Phyllopsora against the latter will preclude adoption of Symplocia, because a "rejected name […] may not be restored for a taxon that includes the type of the corresponding conserved name" (Art. 14.7 of the ICN: Turland & al. in Regnum Veg. 159. 2018). The genus Symplocia was described by Massalongo in 1854 based on the single species Lichen gossypinus Sw. In 1860, Massalongo replaced the name Symplocia with Crocynia, based on Lecidea sect. Crocynia Ach., also originally based on L. gossypinus only. No other names have been introduced in Symplocia. Crocynia has already been conserved against Symplocia (ICN, App. III), and so, as noted above, if Crocynia is rejected in favour of Phyllopsora, Symplocia cannot be taken up for this genus.
The genus name Phyllopsora is widely known to lichenologists working on tropical material, whereas the name Triclinum has, to our knowledge, not been used between its introduction in 1825 and the typification by Jørgensen (l.c.). Moreover, accepting the name Triclinum instead of Phyllopsora would require 56 new combinations. Adopting Crocynia would necessitate an inconvenient re-circumscription of a genus already in taxonomic disarray caused by the historical inclusion of distantly related species, of which the type material has mostly been lost.
Adopting the oldest name, Triclinum, would avoid adding more exceptions to the rules of nomenclature, but it would involve reinstating a name with almost no previous use for a genus that has recently been given a new taxonomic concept. Timdal (l.c. 2008) synonymized Squamacidia and Triclinum with Phyllopsora, and suggested that Phyllopsora be proposed for conservation, should this synonymization be supported by future studies. The generic circumscriptions in a recent molecular phylogenetic study made evident which species truly belong in the genus and whether or not the new taxonomy had been followed. Adopting the name Phyllopsora, on the other hand, would result in renaming only the few morphologically well-distinguished and recognized Crocynia species to Phyllopsora. At the same time, it should be noted that the circumscription of the genus Phyllopsora sensu Kistenich & al. (l.c.: in press) more closely resembles its earlier understanding (Swinscow & Krog, l.c.; Brako, l.c. 1991) than the more recent (Timdal, l.c. | 1,544.2 | 2019-06-01T00:00:00.000 | [
"Chemistry"
] |
Design problems of antennas integrated with solar batteries
Design problems of microstrip antennas placed on the surface of a solar battery, using it as a substrate, are considered. Computational models accounting for the structural elements and parameters of the solar battery were built in ANSYS HFSS. A series of numerical experiments was carried out for several antenna models, with the aim of designing a microstrip antenna integrated with a solar battery without reducing the efficiency of the solar cells. Results of the numerical experiments are presented for several designs of microwave radiators with solid or mesh patch surfaces.
Introduction
Advances in electronics and the development of a modern component base have made it possible to drastically reduce the size and weight of devices, although a number of problems still hinder further progress in this direction [1]. Consequently, several papers have attempted to integrate antennas with other components of electronic systems, in particular with aircraft skin panels [2,3] and with solar cells [1,3-11]. The latter direction is of particular interest because solar panels have considerable dimensions, making it possible to place a number of microstrip radiators on them.
Currently there are three types of solar cells: monocrystalline, polycrystalline, and thin-film (amorphous). The last type is still under development and, as a rule, is not used. A monocrystalline cell of a given size can produce more energy than a polycrystalline one, but it degrades over a shorter term and is more sensitive to decreases in illumination.
In developing such an integrated antenna, engineers must solve two opposing problems: design an antenna with the prescribed parameters (input impedance, radiation pattern, gain, operating frequency range) while not impairing the characteristics of the solar cell (energy conversion efficiency). In this article, we present a brief overview of some published attempts at designing antennas integrated with solar panels, and we show some results of electromagnetic simulations of microstrip antennas located on the surface of a solar battery.
Varieties of integrated antennas with reduced battery shading
Analysis of publications dealing with the problem of creating antennas integrated with solar panels shows that the proposed designs usually use the battery surface as the substrate of a patch antenna. Of special interest are solutions [1-3] that avoid a significant reduction of solar battery effectiveness caused by shading of the battery surface by the structural elements of the antenna. The simplest integrated antenna designs place the microstrip element directly on the surface of the battery [3,8,12] or slightly raised above it [7,8]. In the first variant, a microstrip mesh of thin wires forms the ground-plane screen for the antenna, and the radiating microstrip patch element is placed on a dielectric layer. Its configuration and dimensions are selected so as to provide the required impedance, polarization, and directional characteristics.
To improve antenna matching, some authors suggest using a metamaterial as the dielectric substrate [15]. Along with dipole radiators, several papers investigate slot radiators [4,13] constructed using the same principles. Antennas "raised" above the surface of the solar battery use metallic [8] or dielectric [9] supports to retain the metallic patches.
To improve the transparency of the antenna element and reduce shadowing of the solar battery surface, investigations have considered not only "mesh" radiators [4,12] but also radiators made of optically transparent conductive films based on indium tin oxide, antimony tin oxide, titanium indium oxide, gallium zinc oxide, or AgHT-4 [11,14]. There are also variants with a lower degree of integration, in which the solar battery does not replace the ground plane but is joined to it as an additional element [13,16].
Below we present the results of computational studies of two antennas integrated with a solar battery, which make it possible to increase efficiency by reducing the shadowing of its surface.
The solar battery as a microstrip antenna substrate
Consider a model of an integrated antenna in which the solar cell plays the role of a sophisticated composite substrate. A microstrip radiator of length L and width W is placed on a thin transparent acrylic substrate, raised above the surface of the solar battery to a height h (Fig. 1). The patch surface is not solid but is formed by a grid of conductive strips of width t. The dimensions of the grid cells were selected in the process of optimizing the antenna parameters. The simulation was performed in the WLAN and WiMAX frequency bands while varying the width of the grid conductors and their number on the opposite sides of the patch. The radiator dimensions are L = 37.2 mm, W = 51.4 mm, and h = 5 mm, which provides the best matching at 2.45 GHz. The width of the grid wires was varied from 0.2 mm to 1.0 mm, and the number of conductors on the sides of the patch was varied from 5 to 15. The characteristics of the mesh radiators were compared with those of patch radiators with a solid surface. The calculation results show that changing the width of the conductors leads to a slight change of the input impedance over the frequency band, manifested as a certain increase in the fluctuations of the curve (5-12%). For this reason, a grid of thin 0.2 mm wires is preferable, because it provides a high transparency of the antenna element, about 0.81. For comparison, a grid of 1.0 mm wide conductors provides a transparency of 0.07 with almost the same values of the input impedance of the radiator.
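For orientation, the classical rectangular-patch design equations (Balanis) yield dimensions of the same order as those quoted above. The short Python sketch below applies them; treating the layered solar-cell/acrylic/air structure as a single homogeneous layer, and the value eps_r = 2.6 for the acrylic spacer, are illustrative assumptions rather than values from this paper.

```python
from math import sqrt

C0 = 299_792_458.0  # speed of light, m/s

def patch_dimensions(f0, eps_r, h):
    """Standard rectangular-patch design equations (Balanis).
    f0: design frequency in Hz; eps_r: substrate relative permittivity;
    h: substrate height in metres. Returns (W, L) in metres."""
    # Width chosen for efficient radiation
    W = C0 / (2 * f0) * sqrt(2 / (eps_r + 1))
    # Effective permittivity accounts for fringing fields in air
    eps_eff = (eps_r + 1) / 2 + (eps_r - 1) / 2 * (1 + 12 * h / W) ** -0.5
    # Length extension due to fringing at the radiating edges
    dL = 0.412 * h * ((eps_eff + 0.3) * (W / h + 0.264)) / \
         ((eps_eff - 0.258) * (W / h + 0.8))
    # Physical resonant length
    L = C0 / (2 * f0 * sqrt(eps_eff)) - 2 * dL
    return W, L

# Example: 2.45 GHz patch on a 5 mm spacer with an assumed eps_r of 2.6
W, L = patch_dimensions(2.45e9, 2.6, 5e-3)
print(f"W = {W*1000:.1f} mm, L = {L*1000:.1f} mm")
```

For these assumed inputs the formulas give W of about 46 mm and L of about 35 mm, the same order as the optimized dimensions reported above; the difference is expected, since the real substrate is a composite and the final dimensions were tuned in HFSS.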
The number of mesh wires has a more significant impact on the antenna parameters. The radiator parameters are most sensitive to the number of wires in the plane parallel to the lines of electric current (the E-plane). Their numbers were chosen in the process of optimizing the radiator performance. The optimization criterion was the closeness of the frequency dependences of the resistance and reactance of the mesh radiator to those of the fully metallic radiator. The best result is provided by a grid of 15 conductors in the E-plane and 10 conductors in the H-plane. Figs. 2 and 3 show the frequency characteristics of the active and reactive input impedance of the radiator with a mesh patch (curve 1) and a metallic patch (curve 2). The simulation results show that using a mesh patch produces a certain shift of the resonance toward lower frequencies.
Replacing the solid surface of the patch with a wire mesh does not have a significant impact on the directional properties of the antenna. Figs. 4 and 5 show the antenna radiation patterns in the E-plane and H-plane. Modeling of the antennas integrated with a solar battery (Fig. 1) has shown that replacing the solid patch surface with a mesh leads mainly to a certain shift of the antenna resonant frequency while maintaining the critical parameters at the same level, and it greatly reduces the shading of the solar cell surface.
The solar battery as the patch of the antenna radiating element
Because the upper and lower contacts of the solar battery are wire meshes, it is of interest to simulate an integrated antenna in which a solar battery element is used as a patch with the desired dimensions. Consider the antenna structure with an FR-4 substrate with dimensions of
Conclusions
The numerical experiments show that using mesh surfaces as the patch in the design of antennas integrated with solar panels makes it possible to largely resolve one of the major contradictions in the creation of such radiators and to significantly reduce the shading of the battery surface while preserving its energy efficiency. In the calculations, an adjustment of the resonant dimensions of the mesh radiators may be required. An integrated antenna using a solar cell as a radiator may be used for constructing an antenna array. Such an approach will not only improve the directional properties of the antenna, but will also increase the battery capacity owing to the increase in its total surface.
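As a rough illustration of the directivity benefit of arraying the radiators, the sketch below evaluates the array factor of a uniform linear array. The element count and the half-wavelength spacing are illustrative choices, not values taken from the paper.

```python
import numpy as np

# Array factor of a uniform linear array of N patch elements with spacing d,
# showing how the main beam narrows (directivity improves) as N grows.
N, d_over_lambda = 4, 0.5
theta = np.linspace(-np.pi / 2, np.pi / 2, 721)
psi = 2 * np.pi * d_over_lambda * np.sin(theta)      # inter-element phase shift
AF = np.abs(np.exp(1j * np.outer(np.arange(N), psi)).sum(axis=0)) / N

# Half-power beamwidth: angles where the normalized |AF| stays above 1/sqrt(2)
hpbw = np.degrees(theta[AF >= 1 / np.sqrt(2)][[0, -1]])
print(f"N = {N}: half-power beamwidth ~ {hpbw[1] - hpbw[0]:.1f} deg")
```

For N = 4 at half-wavelength spacing this gives a broadside beamwidth of roughly 26 degrees, versus the much wider beam of a single patch, which is the directional improvement the conclusion refers to.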
The studies and the results of the computational experiments demonstrate the principal possibility of developing integrated systems that combine a solar cell and an antenna system in one device. Optimization of the design parameters makes it possible to obtain microstrip antennas and antenna arrays with acceptable characteristics and the least possible shading of the solar batteries. | 1,906.8 | 2019-01-01T00:00:00.000 | [
"Computer Science"
] |
Irreversible vs Reversible Capacity Fade of Lithium-Sulfur Batteries during Cycling: The Effects of Precipitation and Shuttle
Lithium-sulfur batteries could deliver significantly higher gravimetric energy density and lower cost than Li-ion batteries. Their mass adoption, however, depends on many factors, not least on attaining a predictive understanding of the mechanisms that determine their performance under realistic operational conditions, such as partial charge/discharge cycles. This work addresses a lack of such understanding by studying experimentally and theoretically the response to partial cycling. A lithium-sulfur model is used to analyze the mechanisms dictating the experimentally observed response to partial cycling. The zero-dimensional electrochemical model tracks the time evolution of sulfur species, accounting for two electrochemical reactions, one precipitation/dissolution reaction with nucleation, and shuttle, allowing direct access to the true cell state of charge. The experimentally observed voltage drift is predicted by the model as a result of the interplay between shuttle and the dissolution bottleneck. Other features are shown to be caused by capacity fade. We propose a model of irreversible sulfur loss associated with shuttle, such as caused by reactions on the anode. We find a reversible and an irreversible contribution to the observed capacity fade, and verify experimentally that the reversible component, caused by the dissolution bottleneck, can be recovered through slow charging. This model can be the basis for cycling parameters optimization, or for identifying degradation mechanisms relevant in applications. The model code is released as Supplementary material B.
Lithium sulfur (LiS) batteries have the potential to provide a step change in performance compared to Li-ion batteries, with an expected practical energy density of 700 Wh kg^-1, versus 210 Wh kg^-1 for intercalation Li-ion batteries.1,2 Added benefits, such as a potentially low cost due to the abundance of the active materials, low toxicity and relative safety,3 make them an attractive energy storage solution for a wide variety of applications, such as space exploration4 and low temperature energy delivery.5 However, the relatively low power capabilities, significant self discharge and low cycle life have so far hindered mainstream adoption of LiS cells. Degradation mechanisms such as lithium anode corrosion, self discharge and low coulombic efficiency have all been related to the polysulfide shuttle. As a result, most effort in the research community is currently directed toward decreasing the amount of shuttle through material design, and assessing the properties of the proposed materials through coin cell characterization.
We argue that equally important for accelerating the adoption of this battery chemistry is the understanding of how real cells behave under real operating conditions, which often include incomplete charge/discharge cycles, noisy current loads and rest periods at various states of charge (SoC). Understanding and detecting the mechanisms leading to degradation, such as capacity fade, are intermediate steps crucial to predicting cycle life. Such understanding can help harvest the most from a given LiS cell, by informing the tuning of operation conditions, within the range allowed by a particular application, to achieve optimum performance.
Experimental studies into the effect of varying the voltage window and current rate provide some indication of the factors limiting cycle life. Degradation of cell performance is generally observed for procedures that allow the cell to be in high or low voltage ranges. Avoiding these regions has been successful at increasing cycle life, such as by limiting the operation to the low voltage plateau, and thus using only 75% of the cell capacity.6 At the higher end of the voltage range, the oxidation of polysulfides in reaction with the electrolyte, enhanced by the presence of shuttle, has been hypothesized as the main cause of capacity fade.7 The additive often used as a shuttle suppressant, LiNO3, was found to not alleviate this problem permanently, as it is consumed by oxidative reaction with polysulfides, thus contributing itself to capacity fade. At the lower end of the voltage range, the slow dissolution of the precipitated low order polysulfides was shown to limit cycling performance.8 The presence of this bottleneck and its effect during a constant current charge have also been predicted and analyzed via a zero dimensional model of the LiS cathode.9 The same effect, i.e. the accumulation of a non-conductive film of insoluble Li2S2/Li2S, was shown to be correlated to the largest capacity fade, from amongst three possible causes.10 Discharging below 1.8 V was shown to greatly accelerate degradation in coin cells with catholyte and shuttle suppressant, an effect interpreted to be the result of producing poorly reversible Li2S from Li2S2.11 Most of these studies, however, are performed on coin cells with excess electrolyte and low electrode loading, which were shown to have a markedly different capacity fade rate and mechanism12,13 compared to a commercially viable cell.14 The effect of previous cycles on the present performance of the LiS cell can be strong and determining in applications; however, it has received little attention. It was found that performance is affected by the order in which the cycling rate is varied and by whether both the charge and discharge rates are varied.15 The results of this comprehensive study also indicate that there is a reversible and an irreversible contribution to capacity fade, since long-term and short-term cycling data can indicate opposing cycling procedures and cell compositions as optima. They conclude that a 'medium' charging rate is the best compromise between limiting the overpotential due to Li2S oxidation and minimizing shuttle.
These results highlight the need for a validated model that can aid in exploring the large number of combinations of operating conditions, from which an optimum must be found. Increasing the cycle life, such as by reducing the voltage window, usually comes at the cost of reduced energy throughput per cycle. But under which operation mode is the energy throughput per cell life maximized? Currently, there is no available tool to quantitatively inform such decisions, and only few degradation-aware models for LiS have been published. A probability-based mathematical model was proposed for discharge, and reproduced capacity fade of coin LiS cells.16 This model, however, does not address the physical phenomena that cause the fade. Two physical models have been used for predicting cycling behavior. The one dimensional model developed by Hofmann et al. includes shuttle as a reaction occurring at the anode, and thus predicts Coulombic inefficiency.17 Also at the anode, lithium sulfide can precipitate and accumulate, thus leading to capacity fade with cycling. The 1D+1D model proposed by Danner et al. describes a cell with polysulfides confined by carbon particles in the cathode.18 In their model, an irreversible loss of active material occurs as a result of S^2- ions escaping the cathode confinement, which occurs especially when a high solubility of Li2S is assumed. A comparison to experimental data for validation is not included with either model.
Both physical degradation-aware models are one dimensional. One dimensional models generally suffer from a number of drawbacks, making them, at least for now, unsuitable for assessing battery performance either in applications or as a basis for identification and control algorithms. They include a large number of parameters whose values are not readily available, are difficult to parameterize, and are computationally costly, making them unsuitable for simulating cell behavior over many cycles. Zero dimensional models have the potential to become the ideal platform for cycle life studies, due to three essential features: they were shown to capture many of the relevant features of LiS behavior during both charge and discharge, require relatively short computational times, and depend on relatively few parameters, allowing tractability of all included mechanisms.9 Here, we further develop a previously proposed zero-dimensional physical model, with the aim of developing a quantitative tool for improving the cycle life performance of LiS cells by informing the optimal cycling conditions. The model predictions are validated against experimental data of pouch cells with competitive performance characteristics.
In this work we illustrate that cycling a LiS cell within a limited SoC window, i.e. capacity-limited partial cycling, leads to gradual SoC and voltage drift, thus accelerating the apparent aging rate of the cell. The observed behavior can be predicted by a zero dimensional model that includes a simple mechanism of two electrochemical reactions, kinetic limitations, and precipitation/dissolution of one reaction product. It is shown that shuttle and dissolution rates together dictate the cell behavior under cycling. Shuttle-related degradation via loss of active material is required to reproduce the apparent cell death observed in experiments. The apparent capacity, as measured from the current throughput, is shown to be an extremely poor measure of the cell SoC; the available and dormant capacities predicted by the model should be used instead for state of charge estimation. Based on the understanding gained from model simulations, we are able to propose an approach to increasing cycle life.
Experimental Results and Discussion
Partial cycling experiments were performed on 3.4 Ah cells manufactured by OXIS Energy Ltd. The cells contain a sulfone-based solvent and their electrolyte/sulfur ratio has been optimized in order to deliver maximum cell-level specific energy density, thus indicating a relatively high sulfur and low electrolyte loading compared to the cells in most other published results. In order to emulate their use in an application where a fixed amount of energy is required each time, the charge throughput for charge and discharge was limited at 1.02 Ah, calculated by coulomb counting (Q = I · t) and corresponding to 30% SoC. Additionally, safety voltage cutoffs were set at 1.5 V and 2.45 V. Before cycling, cells were charged at 0.1C (0.34 A) to 2.45 V, and then discharged at 0.2C (0.68 A) by 0%, 35% (1.2 Ah) or 70% (2.4 Ah) of the nominal capacity, to correspond to cycling starting from fully charged, two thirds charged and one third charged.
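A minimal sketch of the coulomb-counting bookkeeping that enforces this protocol; the throughput limit and voltage cutoffs are taken from the description above, while the sampling period, function names, and the dummy voltage source are purely illustrative.

```python
def cycle_half(current_A, dt_s, read_voltage, q_limit_Ah=1.02, v_lo=1.5, v_hi=2.45):
    """Apply a constant current until either the charge-throughput limit
    (coulomb counting, Q = I*t) or one of the safety voltage cutoffs is hit.
    `read_voltage` stands in for the battery-cycler voltage measurement."""
    q_Ah = 0.0
    while q_Ah < q_limit_Ah:
        v = read_voltage()
        if v <= v_lo or v >= v_hi:               # safety cutoff ends the half-cycle early
            break
        q_Ah += abs(current_A) * dt_s / 3600.0   # Ah accumulated in this time step
    return q_Ah

# With a mid-plateau dummy voltage, the 0.3C half-cycle runs to the full
# 1.02 Ah (30% of the 3.4 Ah nominal capacity), i.e. for 3600 s.
print(cycle_half(1.02, 1.0, read_voltage=lambda: 2.1))
```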
The voltage and charge throughput (in Ah) during cycling of a cell at 0.3C charge/0.3C discharge (1.02 A/1.02 A), starting from fully charged, are illustrated in Figure 1. The defining feature of the voltage envelope is the voltage drift: despite seeing the same charge throughput during charge and discharge, the cell voltage hits the lower and then the higher voltage cutoff. Once the latter occurs, a sharp decrease in charge throughput leads to apparent cell death. Figure 2 illustrates charge and discharge voltage curves of sample cycles belonging to each of the three identified stages. Stage I is characterized by a narrowing of the voltage envelope: the upper voltage decreases while the lower voltage increases. Charge and discharge capacities, as measured by charge throughput, are equal and constant, illustrating the fact that both charge and discharge can occur for the pre-set time without causing the cell to reach either voltage cutoff. The discharge voltage curves for cycles number 3 and 4 in Figure 2 indicate that the capacity delivered in the high plateau is quickly and significantly reduced, leading to a drift in the average voltage during discharge. Whether this drift is an effect of the shuttle, of the dissolution bottleneck, or of another cause cannot be ascertained without further investigation, and is discussed in the Model predictions without degradation and Model predictions with degradation sections. The drift is also visible during charging, during which the higher plateau is reached progressively later within the SoC window.
Stage II begins once the lower cutoff voltage is reached; now the upper voltage increases with every charge. The charge capacity remains constant, while the discharge capacity decreases with every cycle. Discharge occurs for increasingly shorter time durations because the lower voltage cutoff is reached sooner. Importantly, the difference between charge and discharge capacities should not be interpreted as charging inefficiency due to shuttle; losses due to shuttle exist, but cannot be identified directly from this data, because the cycling procedure fixes the charge throughput. Cycles number 100 and 150 happen to correspond to the beginning and end of stage II; the cell voltage during these two cycles is plotted in Figure 2. In stage II, the decrease in discharge capacity comes solely from a decrease in the capacity of the lower plateau. The plateau voltage is considerably lower at the end of stage II, indicating a significantly increased cell resistance. The charge at the beginning of this stage shows behavior consistent with that in stage I, namely an increase in the amount of time spent in the lower plateau compared to the first cycles. At the end of stage II, however, charging reaches again the higher plateau. It is difficult to ascertain the cause of these features.
Stage III begins once the higher voltage cutoff is reached. Cycling has become limited by the two voltage cutoffs. Charge and discharge capacities decrease because the time to reach the voltage cutoff decreases abruptly from one cycle to the next. The discharge capacity is slightly higher than the charge capacity during most of stage III. The cause for this difference has not been explored.
The presence of the three stages in the partial cycling behavior remains consistent under the following variations to the standard cycling conditions described above. Plots of the corresponding experimental data are included in Supplementary material A.
Initial capacity. When the cell is cycled starting from a more discharged state, the total number of cycles before the apparent cell death is smaller. Cells starting the procedure fully charged cycle for longer than those starting from 65% SoC, and similarly for 65% vs 30%, as illustrated in Figure 1 in Supplementary material A. The difference in cycle life is almost solely due to a decrease in stage I.
Current rate for both charge and discharge. Decreasing the current rate for both charge and discharge to 0.15C greatly increases cycling life, as shown in Figure 2 in Supplementary material A. In this case, stages I and II, measured in cycle numbers, are significantly longer.
Upper voltage cutoff. Decreasing the upper voltage cutoff limit to 2.33 V greatly increases cycle life, due to an increase in the number of cycles in stage I, possibly due to limiting the effect of shuttle, as shown in Figure 3 of Supplementary material A. Only stages I and II appear, because the low upper voltage limits the voltage increase.
Conclusions from experimental data.-The voltage drift followed by a decay of charge throughput per cycle are the main features of the cell behavior, both with important implications for applications. During stage III, discharges belonging to various cycle numbers start from the same voltage but yield starkly different capacities, illustrating a strong effect of cycling history on current performance. Understanding and tracking this history effect are essential not only for ensuring reproducibility of data, but also for designing robust estimation and control algorithms. Because of a mostly flat voltage plateau during discharge and of shuttle during charge, coulomb counting and voltage reading are both ineffective, leaving no direct experimental tool for the identification of the LiS SoC.19 The voltage drift is expected to be the result of shuttle and incomplete dissolution. The effect of slow dissolution is expected to be weaker for lower charging currents, thus explaining the overall positive effect of using low-rate currents. Many of the more detailed features of the cell's response to the cycling procedure, however, cannot be explained from this data alone. The main features requiring further exploration are listed below.
[Figure 3. Schematic of the model: the current at the cell terminals is the sum of the currents of the two electrochemical reactions occurring in parallel. High order polysulfides (S_8^0) convert to middle order polysulfides (S_4^2-) also in the absence of current, as a result of the polysulfide shuttle. A fraction of the shuttled material is lost irreversibly. Part of the final product (S^2-) precipitates/dissolves according to a precipitation constant.]
1. As the voltage decreases in stage I, the effect of shuttle is expected to decrease. What causes the increase in the upper value of the voltage envelope during stage II in Figure 1?
2. The cause of the abrupt decrease in charge throughput in stage III is not clear (Figure 1).
3. The transition between the low and high plateau during charge seems to move to later times and then back to earlier times during the cycling procedure, as visible in Figure 2. Why?
4. In stage I the high plateau capacity becomes lower, while during stage II the low plateau capacity decreases. What causes this?
We conclude that a model is required to track the true capacity of the cell, the current lost through shuttle, and the quantity of undissolved precipitate throughout cycling. Such a model should be able to differentiate between irreversible and reversible capacity loss, and could be used as a tool for designing optimum procedures for capacity recovery.
Model
The proposed model is an improvement to the zero-dimensional electrochemical model of a LiS cathode published by Marinescu et al., which was shown to reproduce main LiS cell characteristics during charge and discharge. 9 Below, the complete set of equations is summarized, while Figure 3 illustrates a schematic of the various mechanisms included. A description of the variables and parameters used, together with their values, is given in Table I.
The model assumes that a single electrochemical reaction dominates each of the two discharge regions. The two reactions allowed in the system are

\[ S_8^0 + 4e^- \rightleftharpoons 2\,S_4^{2-}, \tag{1a} \]
\[ S_4^{2-} + 4e^- \rightleftharpoons S_2^{2-} + 2\,S^{2-}, \tag{1b} \]

and take place during both discharge and charge. This reaction mechanism corresponds to that chosen by Mikhaylik and Akridge22 and represents the simplest reaction mechanism that includes both precipitating and non-precipitating final discharge products. When fully charged, the sulfur in the cell is assumed to be mostly in the form of S_8^0, a dissolved species. Because the upper voltage limit is set to 2.45 V, it is expected that no significant quantities of solid S8 are formed. The theoretical capacity of the cell c_th, corresponding to the true SoC, is given by

\[ c_{th} = \frac{n_4 F}{3600}\left( \frac{3\,S_8^0}{n_{S_8} M_S} + \frac{S_4^{2-}}{n_{S_4} M_S} \right), \tag{2} \]

where n_4 = 4 represents the number of electrons contributed by each of the two reactions in Equation 1, n_S8 and n_S4 the number of sulfur atoms in each polysulfide, S_8^0 and S_4^2- the amounts of sulfur dissolved in the electrolyte in the respective form in grams, F the Faraday constant, and M_S the molar mass of sulfur. Each S_8^0 molecule contributes twelve electrons to the capacity of the cell, and each S_4^2- four electrons. The value of the true instantaneous cell capacity can be markedly different from the capacity assumed by coulomb counting, which remains constant from one cycle to another for the partial cycling procedure used here. The expression in Equation 2 is valid only for the reaction path chosen in the model, as given in Equation 1.
The equilibrium potential for the two reactions is assumed to obey the Nernst equation,

\[ E_H = E_H^0 + \frac{RT}{n_4 F}\ln\!\left( f_H \frac{S_8^0}{(S_4^{2-})^2} \right), \tag{3a} \]
\[ E_L = E_L^0 + \frac{RT}{n_4 F}\ln\!\left( f_L \frac{S_4^{2-}}{S_2^{2-}\,(S^{2-})^2} \right), \tag{3b} \]

where E_H^0 and E_L^0 are the standard potentials for Equations 1a and 1b respectively, R is the gas constant, and T the temperature. The factors f_L and f_H ensure that the species quantities, measured in grams, are compatible with the Nernst formulation in Equation 3; they depend on the stoichiometries, the molar mass M_S, and the volume of electrolyte v in the system. The total current through the battery is given as the sum of the currents of the two reactions,

\[ I = i_H + i_L. \tag{5} \]

The currents associated with the two reactions are described by the Butler-Volmer approximation,

\[ i_H = -2 i_{H,0}\, a_r \sinh\!\left( \frac{n_4 F \eta_H}{2RT} \right), \tag{6a} \]
\[ i_L = -2 i_{L,0}\, a_r \sinh\!\left( \frac{n_4 F \eta_L}{2RT} \right), \tag{6b} \]

where i_H,0 and i_L,0 are the exchange current densities, η_H and η_L the surface overpotentials, and a_r the constant active surface area available for the high (H) and low (L) plateau reactions. The overpotential of each reaction is given by

\[ \eta_H = V_c - E_H, \tag{7a} \]
\[ \eta_L = V_c - E_L, \tag{7b} \]

where V_c is the cathode voltage, here assumed to equal the measured cell voltage. The various polysulfide species evolve with time as a result of being produced or consumed by the two electrochemical reactions, the shuttle reaction, and the precipitation/dissolution reaction (Equations 8a-8f), where S_p is the mass of precipitated sulfur, ρ_S its density, and S^2-_* the saturation mass of S^2-, assumed to be constant. With the exception of the degradation term in Equation 8b, which is discussed in the Degradation model section, these are the equations used in Marinescu et al.9
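A minimal numeric sketch of the capacity bookkeeping in Equation 2 as reconstructed above (twelve electrons per dissolved S_8^0 molecule, four per S_4^2-); the 2.7 g sulfur loading used in the example is an illustrative value chosen to land near the 3.4 Ah nominal capacity, not a figure taken from the paper.

```python
# Faraday constant (C/mol) and molar mass of sulfur (g/mol)
F, M_S = 96485.33, 32.065

def c_th(S8_g, S4_g, n4=4, nS8=8, nS4=4):
    """Theoretical capacity in Ah (Equation 2): 12 electrons per dissolved
    S8^0 molecule, 4 per S4^2-, converted from coulombs with 1 Ah = 3600 C."""
    return n4 * F / 3600.0 * (3.0 * S8_g / (nS8 * M_S) + S4_g / (nS4 * M_S))

# Illustrative check: 2.7 g of S8^0 and no S4^2- in a fully charged cell
print(f"c_th = {c_th(2.7, 0.0):.2f} Ah")   # ~3.4 Ah
```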
The equilibrium potential for the two reactions is assumed to obey the Nernst equation [3a] where E 0 H , E 0 L are the standard potentials for Equation 1a and 1b respectively, R is the gas constant, and T the temperature. The factors f L , f H ensure that species quantities measured in grams are compatible with the Nernst formulation in Equation 3: where v is the volume of electrolyte in the system. The total current through the battery is given as the sum of currents for the two reactions The currents associated with the two reactions are described by the Butler-Volmer approximation: where i H,0 , i L,0 are the exchange current densities, η H , η L the surface overpotentials, and a r the constant active surface area available for the high (H) and low (L) plateau reactions. The overpotential of each reaction is given by where V c is the cathode voltage, here assume to equal the measured cell voltage. The various polysulfide species evolve with time as a result of being produced or consummed by the two electrochemical reactions, the shuttle reaction and the precipitation/dissolution reactions: where S p is the mass of precipitated sulfur, ρ S its density, and S 2− * the saturation mass of S 2− , assumed to be constant. With the exception of the degradation term in Equation 8b, which is discussed in Degradation model section, these are the equations used in Marinescu et al. 9 Degradation model.-The model developed by Marinescu et al., 9 which does not include a degradation mechanism, cannot retrieve all features of the experimental data, as discussed in the Model predictions without degradation section. We therefore introduce a degradation mechanism, by allowing a fraction of the shuttled polysulfide to become permanently inactive. This mechanism leads to capacity fade and allows the model to capture more of the features observed experimentally. The rate of lost sulfur S l is defined by the term − fs m S S s k s S 0 8 in Equation 8b: where the dimensionless loss rate f s is a positive constant, with f s = 0 leading to no loss. Through this term, an amount of the shuttled S 0 8 does not convert to S 2− 4 , but is instead lost. The expression for S l assumes that the fraction of the shuttled material that is rendered inactive, fs m S S s , is proportional to the amount already shuttled, thus increasing as the cell ages further. This concept reflects one possible shuttle-related degradation mechanism: the shuttle reaction leads to a layer of precipitated products at the anode, 20 increasing the anode roughness, and thus providing an increased surface area for further reactions to take place. 21 In practice, this zero dimensional model captures an effective sulfur loss, irrespective of the location where this loss would occur in the real system. The total mass of lost sulfur at a given time can be obtained by integrating Equation 9, with the shuttled sulfur S s substituted from Equation 8f: . [10] In the current model, the shuttle and any associated degradation are considered to take place only during charge. While self discharge mechanisms do take place during rest, and have been associated with the polysulfide shuttle, 22,23 the relation between the shuttle during cycling and the self discharge during rest is not established. The purpose of this model is to simulate cycling behavior, and so self discharge during storage is not relevant, because no significant time is spent at rest. 
The presence of shuttle during discharge, its magnitude relative to that during charge, and an associated loss mechanism are much less documented in the literature. For simplicity, the effect of polysulfide shuttle during discharge is ignored in the following analysis.
Other degradation mechanisms have been identified experimentally, both related to and independent from shuttle. In particular, electrolyte depletion and lithium consumption are also expected to play a role in the cycle life of LiS batteries. 24 Developing a model to account for these effects remains the subject of further studies.
Model limitations.-In its current form, the model developed here cannot be used for quantitative prediction, because it does not reproduce the values of the discharge voltage plateaus seen in experiments. The values of E_H^0 and E_L^0 determine the elevation of the plateaus, and can be chosen further apart from one another, thus bringing them closer to the experimentally observed values. This, however, introduces a high computational cost, which is not yet justified in this model.
Perfectly fitting the predicted voltage curves to experimental data cannot yet be a target, as this model predicts only the voltage of the cathode, not of the whole cell. The Ohmic drop contributes significantly to the cell voltage,25 and varies considerably during discharge. Thus the voltage drop due to the series resistance of the cell, as well as the anode potential, must be considered before fitting to experimental data. Finally, the model cannot retrieve the diminished capacity obtained with increasing discharge rate. Mass transport limitations, which are not accounted for in this zero dimensional model, have been shown to limit the power rate.26 The effect of cathode passivation by the insulating precipitate Li2S at low states of charge, which has been associated with increased overpotential at the beginning of charging,27 is also disregarded here. It should be noted, however, that Zhang et al. have shown that this effect can be attributed instead to transport-limited reaction kinetics.28 Retrieving the correct rate limitation during discharge remains the subject of future work. In the present study, the focus is placed on using the model as a tool to gather information on the mechanisms behind the observed cycling behavior.
Computational implementation and initial conditions.-Equations 3-8 form a differential algebraic system that can be solved for the thirteen unknowns: the Nernst potentials E_H and E_L, the currents of the two reactions i_H and i_L, the cell voltage V_c, here equal to the cathode voltage, the overpotentials η_H and η_L, the masses of the five forms of sulfur S_8^0, S_4^2-, S_2^2-, S^2-, S_p, and the total shuttled sulfur S_s. The Jacobian for the system is calculated analytically and the system is solved in Matlab using a second order solver. For the Matlab code please see Supplementary material B.
Changes in the initial operating conditions, such as the magnitude of the current, were found to lead to non-physical variations in the total sulfur mass in the system (e.g. 3%).9 This error occurred due to the way in which the initial conditions were calculated. For example, in the case of discharge from fully charged, initial values of V_c, S_8^0 and S_p were assumed; η_H was calculated from 6a under the assumption I = i_H, and was thus dependent on I. E_H was calculated from 7a, and S_4^2- was then obtained through 3a, and was thus dependent on I. While such small differences in total sulfur mass do not affect the conclusions from the model results, they could become significant once a concentration-dependent series resistance is introduced, as recommended by Zhang et al.25 Generally, voltage and current should not both be inputs to the model. Ideally, the initial conditions are obtained by solving the system of equations formed by Equation 3, E_H = E_L, and mass conservation. Instead of solving a system with transcendental equations, an alternative procedure is used that ensures systems starting at different operational conditions are perfectly comparable.
For discharge from equilibrium, the initial state of the cell is calculated by assuming the quantities of S_8^0 and S_4^2- (e.g. 998:1 for a charged cell) and calculating the other polysulfide amounts under conditions of zero current and equilibrium of the precipitation reaction. The former condition imposes η_H = η_L = 0, and thus V_c = E_H = E_L, while the latter sets the value of S^2- to S^2-_*. The model is then run while linearly ramping the current over a short time (0.1 s), from zero to the desired initial discharge rate. For charging from equilibrium, the state of the cell is calculated by running the model on a cell discharged at 0.1C, left to equilibrate against electrochemical reactions and precipitation/dissolution under zero current, and then charged with a brief (0.1 s) linear ramp-up of the current to the desired initial instantaneous charging current.
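The zero-current initialization described above can be sketched as follows. The expressions mirror the Nernst relations 3a/3b as reconstructed earlier, and every numerical value below (standard potentials, gram-to-concentration factors, saturation mass) is a placeholder standing in for the paper's Table I entries rather than a value from the paper.

```python
import numpy as np

R, T, F, n4 = 8.314, 298.15, 96485.33, 4

# Placeholder values standing in for Table I (not the paper's numbers)
E_H0, E_L0 = 2.35, 2.18        # standard potentials (V)
f_H, f_L   = 0.73, 0.067       # gram-to-concentration Nernst factors
S2m_sat    = 1e-4              # saturation mass of S^2- (g)

def equilibrium_state(S8, S4):
    """Zero-current initial state: eta_H = eta_L = 0 forces V_c = E_H = E_L,
    and precipitation equilibrium pins S^2- at its saturation value.
    Given an S8:S4 split (e.g. 998:1 for a charged cell), the remaining
    unknown S2^2- follows from inverting E_L(S2^2-, S^2-) = E_H."""
    E_H = E_H0 + R * T / (n4 * F) * np.log(f_H * S8 / S4**2)
    S2m = S2m_sat
    S2 = f_L * S4 / S2m**2 / np.exp((E_H - E_L0) * n4 * F / (R * T))
    return {"V_c": E_H, "S8": S8, "S4": S4, "S2": S2, "S2m": S2m}

print(equilibrium_state(S8=2.7 * 998 / 999, S4=2.7 / 999))
```

For a charged cell this yields a vanishingly small S2^2- mass, consistent with the assumption that nearly all sulfur starts as dissolved S_8^0.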
Model Results and Discussion
Model parameters.-The system parameters and their default values are given in Table I.
Model predictions without degradation.-Figure 4 shows the predictions of the model without sulfur loss for the cycling procedure of Figure 1: the voltage drifts downward, while the charge and discharge capacities are constant and equal. Despite a voltage drift being present, only two stages are visible in Figure 4a. The corresponding evolution of the mass of precipitate S_p and that of the shuttled sulfur are shown for each cycle in Figure 4b. The mass of precipitated sulfur increases during stage I, while the shuttling amount decreases. During stage II, an equilibrium is reached between the input current and the losses through shuttle. It can be seen that, although the shuttle decreases as a result of the voltage drift, it never becomes zero.
The model predicts a voltage drift as the combined consequence of precipitate accumulation and charging inefficiency due to shuttle.
- The upper voltage decreases as a direct result of shuttle: during each charge a fraction of the input energy is lost through shuttle, while each discharge still occurs for the pre-specified 30% SoC. The usable capacity of the cell (Equation 2) at the end of a charge decreases, as illustrated in Figure 4c, and so does the maximum voltage reached.
- The significant decrease in the lower voltage, however, is a direct consequence of two mechanisms: firstly, according to this model, S^2- precipitates more readily than it dissolves, as explained below, and secondly, the rate of the electrochemical reactions during charging is limited by the amount of Li2S that dissolves. This phenomenon will be referred to as the dissolution bottleneck. If the precipitation and dissolution processes were symmetrical, the lower voltage would remain constant, initially because it traces the voltage of the lower plateau of a constant current discharge. Once the drift brought the cell to a low enough SoC, the shuttle impact would be diminished, and no further drift would occur. This can be verified by running the model with shuttle but no precipitation, such as by setting a high saturation concentration, which yields the voltage behavior presented in Figure 5a, with the shuttled mass given in Figure 5b; without a dissolution bottleneck, stage I becomes infinite. The dissolution bottleneck, associated here with the precipitating species S^2-, renders part of the active sulfur temporarily unusable, and thus decreases the cell capacity with every cycle. The voltage drift occurs faster than in the case of no precipitation, as the lower limit of the voltage envelope moves along a constant current discharge curve of ever increasing current rate. This effective increase in the current rate leads to the lower voltage cutoff being reached significantly faster in a model with precipitation and no shuttle, Figure 5c, than in one without precipitation, Figure 5a.
The model is highly sensitive to the value of S^2-_*, which determines whether the amount of precipitate increases with cycling or reaches a dynamic equilibrium; the behavior of the two types of cells is shown in Figure 5d.
The voltage curves predicted by the model with both shuttle and precipitation during selected cycles are illustrated in Figure 4d and are similar to the experimental data in Figure 2. The discharge capacity in the high voltage plateau decreases, followed by that of the low voltage plateau, consistent with a drift in cell SoC. While the cell remains within the higher plateau during the first cycle, it enters the lower plateau already in the second cycle, because the charge in between the two discharges is not complete, as a result of shuttle. Once the cell spends significant time in the lower plateau, precipitate starts to accumulate, and the usable capacity of the cell decreases. Charging inefficiency and reduced cell capacity both act to lower the average cell SoC from one cycle to the next.
The lower voltage cutoff is reached when the cell is fully discharged, corresponding to half the sulfur being in the form of S^2- or S_p, the maximum quantity of precipitate allowed by the assumed reaction mechanism. Further cycling repeats identically, as a dynamic equilibrium has been reached: the difference between charging and discharging times, or energy throughputs, compensates the effect of shuttle.
Mechanism of precipitate accumulation.-Precipitate accumulates during cycling because of the form of the precipitation/dissolution term in Equation 8e, and despite the parameters k_p and S^2-_* having the same value during charge and discharge. The mechanism of accumulation is visible in Figures 4e and 4f:
- The average cell capacity decreases with cycling, and reaches progressively deeper into the lower plateau, allowing precipitation to start occurring earlier in the discharge cycle;
- The amount precipitated per unit time is significantly higher than the amount subsequently dissolved, due to the relation used for precipitation/dissolution.29 The term (S^2- - S^2-_*) in Equation 8e allows an infinite driving force for precipitation, because a higher current drives the electrochemical reaction faster and can build a supersaturated solution. On the contrary, the dissolution rate during charge is limited by 0 < S^2- < S^2-_*;
- While precipitation stops as soon as the current changes from discharge to charge, dissolution can continue during the discharge, effectively providing a reservoir of active material with its own response time;
- With further cycling, as more sulfur is trapped as precipitate, a large driving force for precipitation is more difficult to achieve, and the rates of dissolution and precipitation become comparable.
A short numerical illustration of this asymmetry is given below.
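The sketch assumes the precipitation/dissolution term in Equation 8e has the form dS_p/dt = k_eff S_p (S^2- - S^2-_*), as suggested by the quoted driving-force factor, with k_eff lumping k_p/(v ρ_S); this form and all numerical values are illustrative assumptions.

```python
# Illustrative asymmetry of an assumed precipitation/dissolution term
# dS_p/dt = k_eff * S_p * (S2m - S2m_sat): the driving force is unbounded
# above (supersaturation), but dissolution can at most deplete S2m to 0.
k_eff, S2m_sat, dt = 5.0, 1e-4, 0.1   # all values purely illustrative

def step(S_p, S2m):
    dSp = k_eff * S_p * (S2m - S2m_sat) * dt
    dSp = min(dSp, S2m)   # cannot precipitate more S^2- than is dissolved
    return S_p + dSp, S2m - dSp

# Strong supersaturation (discharge) vs. maximal undersaturation (charge)
print(step(S_p=0.1, S2m=10 * S2m_sat))  # fast precipitation
print(step(S_p=0.1, S2m=0.0))           # dissolution rate capped by S2m_sat
```

With these numbers, a tenfold supersaturation precipitates roughly an order of magnitude more mass per step than the fastest possible dissolution recovers, and the S_p prefactor makes accumulation self-reinforcing.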
The presence of a third stage, as seen in the experimental data, is retrieved only by the model with loss.
The disappearance of the voltage dip within the first few discharges.-The disappearance of the dip within the first few cycles, visible in Figure 2 and in Figure 4e, can be understood by analyzing in more detail the model predictions for the first five cycles. The evolution of species, shown in Figure 6a, confirms the capacity drift conclusion: in effect, the capacity window of operation shifts toward lower capacities from one cycle to another. The precipitated S_p increases and the interval of values taken by S_4^2- shifts slowly past the 'mountain peak' associated with the boundary between plateaus. Figure 6b illustrates the evolution of the cell voltage, while Figure 6c illustrates the corresponding S^2- evolution during the same first five cycles. The model predicts that a pronounced dip quickly becomes shallower within the first few cycles. In the model, the voltage dip appears if the electrolyte is supersaturated with S^2-. This state cannot occur once precipitate has accumulated, because the total mass of dissolved S^2- is smaller and the rate of precipitation increases proportionally to the amount of already precipitated material. The dip in between the two voltage plateaus is a feature often but not always seen in the literature. In the experiments presented here, the dip is not as visible as predicted by the model, most probably because the cells have been pre-cycled.
Model predictions with degradation.-The predictions of the model with sulfur loss are illustrated in Figure 7. Figure 7a shows that the voltage of the cell exhibits the three stages observed experimentally. The corresponding species evolution is plotted in Figure 7b. As in the case of the model without degradation, illustrated in Figure 4, stage I is characterized by the accumulation of precipitate, and an associated decrease of the average cell SoC and total cell capacity. Once the cell is fully discharged, i.e. the maximum amount is precipitated, the lower voltage cutoff is reached, marking the beginning of stage II. In generating the data for Figure 4 and Figure 7, while all other parameter values remain the same, the shuttle rate is significantly different. Due to degradation, in order for stage I to have a similar length, the value of k_s must be smaller in the case of loss than in the case of no loss.
In this degradation-aware model, equilibrium cannot be reached during stage II. Despite relatively minor shuttle, the losses associated with degradation (S_l in Figure 7b) gradually decrease the total capacity of the cell. The fixed charge/discharge current corresponds to increasing C-rate values, allowing the upper voltage to increase without a corresponding increase in the cell capacity at the end of charging. The true cell capacity, as calculated from the availability of dissolved sulfur species, is illustrated in Figure 7c. The dormant capacity corresponds to the amount of S_p that is undissolved at the end of each charge, and thus momentarily unavailable, despite not being lost. The maximum capacity corresponds to the sum of active and dormant capacities in the cell, its decrease with cycle number corresponding to the amount of material lost through shuttling. Both capacities are obtained from the respective masses of sulfur through the relation in Equation 11, which converts a sulfur mass into a capacity in Ah, where, in this case, the mass is either S_p (for dormant capacity) or (m_S - S_l) (for maximum capacity), both at the end of charge. Figure 7c makes it apparent that, during stage II, the cell is cycled between the same true capacity values, while its maximum capacity, given by the mass of sulfur that is not lost, is decreasing. The end of stage II is reached when the cell reaches the higher voltage cutoff when charged with the desired charge throughput. During stage III, there is not enough active capacity left in the cell, as given by the difference between the maximum and dormant capacities. In a model without loss, an increase of the upper voltage envelope can only be reproduced by assuming extremely strong precipitation, which is unrealistic, as it would lead to extremely poor cyclability. Each charge in stage III is voltage-limited, causing the true capacity at the end of each subsequent charge to decrease. The steep decrease in the charge throughput, apparent in Figure 7a, is thus a direct consequence of the voltage-limited cycling coupled with an ever-increasing effective C-rate of the charging current. The latter is indicated by the sharp increase in dormant capacity during stage III, i.e. in the amount of sulfur that remains precipitated at the end of each charge (also visible in Figure 7b), caused by the decreasing length of the charging half-cycle. Figures 7e and 7f show that the same mechanism of precipitate S_p accumulation is occurring here as in the case of the model without degradation.
[Figure 4. The average cell SoC decreases during cycling, as seen by the increased contribution of the lower plateau reaction to the discharge capacity. The voltage dip in between the plateaus, initially caused by supersaturation of the electrolyte with S^2-, disappears within the first few cycles. As precipitated sulfur accumulates, the conditions for a strong supersaturated state cannot be met.]
The effects leading to reversible and irreversible losses are coupled in this model: the stronger the effect of precipitation, i.e. the more precipitated S_p accumulates, the larger the contribution of the upper plateau reaction to the charging current, the more shuttle, and thus the more sulfur loss and capacity loss. This coupling is a consequence of the assumption made here that the amount of shuttle depends on the amount of S_8^0, and not directly on the cell voltage. The model with degradation seems to capture the important phenomena playing a role in the cell behavior during partial cycling: the three stages are retrieved due to an increase in the upper voltage during stage II, and the predicted charge and discharge voltages at various cycles in the procedure are qualitatively similar to experimental data, Figure 7d. Both the upper and the lower voltage plateaus become lower with cycling. A lowering of the voltage plateaus has also been retrieved by a one dimensional model proposed by Hofmann et al.,17 where precipitated Li2S accumulates with cycling on the anode. Experimentally, a reduced discharge capacity with cycling was linked to an effective increase in charge rates by Poux et al.,15 who probably witnessed a manifestation of the dissolution bottleneck.
Differences between the shape of the predicted and real voltage envelopes are mainly due to differences between the predicted and real cell voltage during a constant current discharge. These errors are inherent to the use of the relatively simple set of only two electrochemical reactions in the model.
[Figure 7. Model predictions with degradation: when assuming sulfur mass is rendered gradually inactive, such as due to shuttle, the three stages of cycling seen in the experimental data in Figure 1 can be retrieved. For a 3.4 Ah cell cycled at 0.3C/0.3C charge/discharge from fully charged for 3600 s/3600 s with a 2.21 V/2.38 V voltage cutoff, with degradation f_s = 0.25, k_s = 3 × 10^-5 s^-1: a) voltage envelope and associated charge and discharge capacity throughput, calculated from I · t; b) rate of shuttled sulfur, and masses of precipitated and lost sulfur; c) instantaneous assumed and true capacities, as calculated from I · t (It) and concentrations of available species (true) from Equation 2; maximum theoretical capacity (max), corresponding to the mass of sulfur that is not lost through shuttle, and dormant capacity (dorm), corresponding to the amount of precipitate left at the end of each charge; d) voltage curves during discharge and the following charge at various points during cycling; e) and f) evolution of precipitated sulfur and the applied current during sample cycles (I < 0 during charge and I > 0 during discharge).]
Crucially, the model predicts that the lost capacity leading to the voltage drift is partially reversible, because it is partially caused by the accumulation of precipitate, active material which is, in theory, only temporarily unavailable. Charging via a lower current rate should recover at least some of this capacity, as it allows more time for the dissolution of S_p to occur. Low current charging, however, is expected to also introduce more irreversible loss, as the cell remains in the shuttling region for longer.
The effect of recovery cycles.-Based on model predictions, a cell identical to that described in the Experimental results and discussion section was charged at 0.1C (0.34 A) once every 25 cycles (instead of 0.3C), with a 2.45 V upper voltage cutoff. The cell response is shown in Figure 8. As expected, the voltage drift is reduced; however, it is not eliminated.
Predictions of a cell's voltage under cycling with periodic recovery via slower charging are illustrated in Figure 9a for the model without loss and in Figure 9b for the model with loss. The model without degradation exaggerates the positive effect of the recovery cycle, while the model with degradation exaggerates its negative effect, compared to the experimentally measured behavior. It can be seen in Figure 9c that the full amount of S_p is dissolved during a recovery cycle in the model without loss, such that, as long as recovery occurs each time before stage I ends, i.e. before the lower voltage cutoff is reached, the procedure can be repeated indefinitely. In the model with degradation, almost the entire precipitated amount is dissolved during a slow charge, while considerably increasing the amount of irreversibly lost material. The latter effect is strong enough to significantly shorten the duration of stage I of the cycling procedure with recovery.
Alternative mechanisms for capacity fade should be explored in the model, with a focus on retrieving the experimentally observed effect on cycling performance. For example, the irreversible oxidation of low order polysulfides to higher order species at the cathode side can also lead to reduced kinetics and capacity fade due to loss of active material.30 As this occurs at low voltages, an irreversible precipitation-related loss could account for the three cycling stages seen in experiments without attributing a strong negative effect to a slow recovery charge. The effect of other precipitation-related irreversibility, such as caused by details of the crystallization/nucleation and growth of the precipitated material, or by changes to the microstructure, could also be explored. It has been shown that polysulfides react with the electrolyte even in the absence of shuttle, leading to loss of active material: a system with the added shuttle suppressant LiNO3 reached close to perfect Coulombic efficiency, but still suffered from capacity fade.27
Conclusions
Experimental data from LiS cells under capacity-limited partial cycling are used to develop an aging-aware model of a LiS cathode and to explore the mechanisms limiting the cell's cycle life. Cycling of LiS cells is shown experimentally to lead to voltage drift and apparent end of life, even when the Coulombic efficiency appears to be at its highest. Predictions from a zero dimensional model confirm that this voltage drift is associated with a SoC drift, and emphasize the dangers of using coulomb counting or voltage readings for SoC estimation of LiS cells under cycling conditions. The model predicts that the cell response is dominated by the interplay between charging inefficiencies due to shuttle and dissolution limitation during charging; the latter not only creates a bottleneck for the electrochemical reactions, but also leads to progressive accumulation of precipitate. While these conclusions are drawn from partial cycling data, they are also valid for complete cycles.
The model developed here is used to explain and verify the seemingly complex behavior under capacity-limited cycling, showing that many of the observed features are the result of the imposed constraints on operational parameters, such as the presence of the voltage cutoff. Some features can be retrieved only by a model with degradation, here implemented as loss of active material as a fraction of shuttled sulfur. The degradation-aware model indicates that (i) reversible capacity loss occurs when, during charging, the precipitated amount does not fully re-dissolve; this is verified by the fact that applying a slow charge helps recover some of that capacity loss; and (ii) irreversible capacity loss occurs when part of the active sulfur mass is lost, in this model due to shuttle; this can be verified by cycling the cell with a lower upper voltage cutoff, which shows improved cycle life, as shown in Supplementary material A.
These findings indicate that charging conditions are crucial to the true cell capacity and cycle life. Low rate charging leads to low charging efficiency and irreversible loss associated with shuttle. High rate charging leads to precipitate accumulation, and thus increased reversible capacity fade. The developed model can help distinguish between these two modes of capacity fade, and is thus an ideal platform for SoC estimation.
The degradation-aware model overestimates the negative effect of slow charging compared to the behavior seen in experiments. This indicates that a more complex description of the degradation mechanisms is required. The model can become a quantitative tool for recommending operational parameters for improved cell longevity under various constraints, such as maximum energy or power per lifetime. The present work highlights the importance of studying the performance of LiS cells under realistic load cycles in accelerating their application-targeted improvement.
Universal bounds on the performance of information-thermodynamic engine
We investigate fundamental limits on the performance of information processing systems from the perspective of information thermodynamics. We first extend the thermodynamic uncertainty relation (TUR) to a subsystem. Specifically, for a bipartite composite system consisting of a system of interest X and an auxiliary system Y, we show that the relative fluctuation of an arbitrary current for X is lower bounded not only by the entropy production associated with X but also by the information flow between X and Y. As a direct consequence of this bipartite TUR, we prove universal trade-off relations between the output power and efficiency of an information-thermodynamic engine in the fast relaxation limit of the auxiliary system. In this limit, we further show that the Gallavotti-Cohen symmetry is satisfied even in the presence of information flow. This symmetry leads to universal relations between the fluctuations of information flow and entropy production in the linear response regime. We illustrate these results with simple examples: coupled quantum dots and coupled linear overdamped Langevin equations. Interestingly, in the latter case, the equality of the bipartite TUR is achieved even far from equilibrium, which is a very different property from the standard TUR. Our results will be applicable to a wide range of systems, including biological systems, and thus provide insight into the design principles of biological systems.
I. INTRODUCTION
Biological systems maintain their functions by acquiring and using information about fluctuating environments. For example, E. coli regulates its flagellar motors by processing information about external ligand concentrations to adapt to the environment [1][2][3][4][5][6]. A gene network senses a sudden increase in protein concentration and then suppresses mRNA transcription to maintain protein levels [7][8][9][10]. While these systems rely on a negative feedback mechanism that suppresses intrinsic noise by using information about fluctuating environments, some molecular machines can even convert information into output work. Examples include FoF1-ATP synthase, where the F1 motor converts energy and information provided by the Fo motor into the synthesis of ATP molecules [11][12][13]. To elucidate the general design principles underlying biological systems, it is necessary to investigate the fundamental limits on the performance of such information processing systems.
Stochastic thermodynamics has revealed various fundamental limits on the thermodynamic aspects of such fluctuating mesoscale systems [14][15][16][17]. For example, the thermodynamic uncertainty relation (TUR) states that suppressing the relative fluctuations of an arbitrary time-integrated current Ĵ_T necessarily involves a thermodynamic cost [18][19][20]:

Var[Ĵ_T]/⟨Ĵ_T⟩² ≥ 2/∆σ, (1)

where ⟨Ĵ_T⟩ and Var[Ĵ_T] denote the mean and variance of Ĵ_T, and ∆σ denotes the total entropy production up to time T. While the validity of the TUR in its original form (1) is limited to steady-state currents in Markov jump processes and overdamped Langevin dynamics, TUR-type inequalities have even revealed that there is a fundamental limit to the performance of a thermodynamic heat engine. Specifically, a heat engine with a finite output power cannot achieve the Carnot efficiency as long as the fluctuation of the output power is finite [21][22][23][24]. Furthermore, for a stationary cross-transport system with input and output currents, which can be regarded as fuel (positive entropy) and load (negative entropy), respectively, input-output fluctuation inequalities hold in the linear response regime [25,26]. These inequalities state that the fluctuation of the output current is smaller than that of the input current, while the relative fluctuation of the output current is larger than that of the input current.
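For readers unfamiliar with the TUR, a minimal numerical check on a textbook example (a biased random walk, not a system from this paper) may be helpful; because the jump counts are independent Poisson variables, every quantity in (1) is known exactly.

```python
import numpy as np

# Biased random walk with forward rate kp and backward rate km (k_B = 1).
kp, km, T = 3.0, 1.0, 10.0
mean_J = (kp - km) * T                      # <J_T>
var_J  = (kp + km) * T                      # Var[J_T]: independent Poisson counts
sigma  = (kp - km) * np.log(kp / km) * T    # total entropy production

lhs, rhs = var_J / mean_J**2, 2.0 / sigma
print(f"Var/<J>^2 = {lhs:.4f}  >=  2/sigma = {rhs:.4f}  ->  {lhs >= rhs}")
```

Here the TUR reduces to the elementary inequality (kp + km) ln(kp/km) ≥ 2(kp − km), which holds for any pair of positive rates.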
In this paper, we aim to find similar fundamental limits for information processing systems, in particular for an information-thermodynamic engine that converts information into output work. Information thermodynamics, which is essentially stochastic thermodynamics for subsystems, is a thermodynamic framework for information flow between two interacting subsystems, either autonomous or nonautonomous [27]. This theory reveals that the information flow between subsystems can significantly affect the thermodynamic constraints of each subsystem. While information thermodynamics has its origins in the thought experiment of Maxwell's demon, it has recently been applied to information processing at the cellular level in biological systems [5,[28][29][30][31] and even to fully developed fluid turbulence [32].
Here, we consider a composite system consisting of a system of interest X and an auxiliary system Y , described by continuous-time Markov jump processes or diffusion processes with only even variables and parameters under time reversal. Our main results can be summarized as follows.
(i) Bipartite TUR.-We first extend the standard TUR (1) to a subsystem. For an arbitrary time-integrated current Ĵ_T for X with arbitrary observation time T, we prove that [cf. Ineq. (26)]

Var[Ĵ_T]/⟨Ĵ_T⟩² ≥ 2(1 + δ_J)²/(∆S^X_tot − ∆I^X).

Here, ∆S^X_tot denotes the entropy production associated with X, and ∆I^X denotes the time-integrated information flow, which is the amount of information exchanged with the auxiliary system Y. The additional term δ_J reflects the contribution of the interaction with Y. This bipartite TUR states that the relative fluctuation of the current for the subsystem X is lower bounded not only by the entropy production associated with X but also by the information transfer between X and Y. In particular, if Y evolves much faster than X, we can further show that δ_J → 0 in the steady state. In this case, the bipartite TUR gives a tighter bound than the standard TUR (1). While we derive the bipartite TUR here in the steady state, this relation is valid even for systems under arbitrary time-dependent driving from arbitrary initial states (see Appendix A).
(ii) Trade-off relations.-As a consequence of the bipartite TUR, we show that there are fundamental limits on the performance of an information-thermodynamic engine.
When the system of interest X acts as a steady-state information-thermodynamic engine, its performance can be quantified, e.g., by the negative entropy production rate |Ṡ^X_env| in the environment and the information-thermodynamic efficiency η^X_S := |Ṡ^X_env|/|İ^X|, which quantifies how efficiently the engine converts information into negative entropy production. In the typical case where the auxiliary system Y evolves much faster than the engine X, we prove universal trade-off relations between |Ṡ^X_env| and η^X_S [cf. Ineqs. (74) and (76)]:

|Ṡ^X_env| ≤ D_S (1 − η^X_S)/η^X_S

and

|Ṡ^X_env| ≤ D_I η^X_S (1 − η^X_S),

where D_S and D_I denote the fluctuations of the stochastic medium entropy production and of the time-integrated stochastic information flow, respectively. These inequalities state that an information engine with a finite negative entropy production rate cannot achieve η^X_S = 1 as long as the fluctuations D_S and D_I are finite. In order to achieve a finite negative entropy production rate with η^X_S = 1, the fluctuations D_S and D_I must diverge.
(iii) Gallavotti-Cohen symmetry.-In the fast relaxation limit of the auxiliary system Y, we further show that the Gallavotti-Cohen symmetry is satisfied even in the presence of information flow [cf. Eq. (116)].

(iv) Input-output fluctuation inequalities.-As a direct consequence of the Gallavotti-Cohen symmetry, we show that the input-output fluctuation inequalities hold even in the case where information flow is regarded as an input or output current. That is, in the linear response regime where X acts as a steady-state information-thermodynamic engine, we prove that [cf. Ineqs. (122) and (123)]

D_S ≤ D_I

and

D_S/(Ṡ^X_env)² ≥ D_I/(İ^X)².

These inequalities state that the fluctuation of the output current (negative entropy production) is smaller than that of the input current (information flow), while the relative fluctuation of the output current is larger than that of the input current.
We illustrate these results with two simple examples: coupled quantum dots and coupled linear overdamped Langevin equations. Interestingly, the latter provides an example where the equality of the bipartite TUR is achieved even far from equilibrium. This is in contrast to the standard TUR (1), where the equality is guaranteed only in the near-equilibrium limit [20,36]. While the bipartite TUR is generally not valid for systems with broken time-reversal symmetry, such as underdamped Langevin dynamics [37][38][39][40][41][42][43], many relevant biological systems are often described by continuous-time Markov jump processes or diffusion processes with only even variables and parameters under time reversal. Therefore, these results will be applicable to a wide range of systems, including biological systems, and thus shed new light on our understanding of the design principles of biological systems. This paper is organized as follows. In Sec. II, we introduce important information-theoretic quantities and briefly review the framework of information thermodynamics in a general setup. In Sec. III A, we describe the bipartite TUR, which is the first main result of this paper. The detailed derivation of the bipartite TUR is presented in Sec. III B. In Sec. III C, we show that the bipartite TUR reduces to the form of the standard TUR if the auxiliary system evolves much faster than the system of interest. We discuss the equality condition of the bipartite TUR in Sec. III D. In Sec. IV, we show that the bipartite TUR gives universal bounds on the performance of an information-thermodynamic engine, which is the second main result of this paper. In Sec. V A, as the third main result of this paper, we prove that the Gallavotti-Cohen symmetry holds even in the presence of information flow in the fast relaxation limit of the auxiliary system. As a corollary to this symmetry, we show that the input-output fluctuation inequalities are valid even in the case where information flow is regarded as an input or output current in Sec. V B. In Sec. VI, we illustrate our results with two examples. In Sec. VII, we conclude this paper with some remarks.
II. SETUP
We consider a composite system that consists of two subsystems, X (system of interest) and Y (auxiliary system), whose time evolution is described by Markov jump processes or overdamped Langevin equations. Let x_t and y_t be the states of X and Y at time t, respectively. We assume that the system satisfies the bipartite property: the transition probability p(x_{t+dt}, y_{t+dt}|x_t, y_t) satisfies p(x_{t+dt}, y_{t+dt}|x_t, y_t) = p(x_{t+dt}|x_t, y_t) p(y_{t+dt}|x_t, y_t) (8) for dt → 0⁺. This property means that X and Y do not jump simultaneously in the case of Markov jump processes, and that the noises acting on X and Y are uncorrelated in the case of diffusion processes. In this paper, we focus mainly on Markov jump processes, while the extension to the overdamped Langevin case is straightforward. Let p_t(x, y) be the probability of state (x, y) at time t. The time evolution of p_t(x, y) is described by the master equation

∂_t p_t(x, y) = Σ_{x'} [w^y_{xx'} p_t(x', y) − w^y_{x'x} p_t(x, y)] + Σ_{y'} [w^{yy'}_x p_t(x, y') − w^{y'y}_x p_t(x, y)],

where w^y_{xx'} denotes the transition rate from x' to x at fixed y, and w^{yy'}_x denotes the transition rate from y' to y at fixed x. The rate matrix is assumed to be irreducible to ensure the uniqueness of the stationary distribution p_ss(x, y). Note that X and Y can affect each other's transition rates, although they cannot jump simultaneously.
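A minimal sketch of such a bipartite generator, with arbitrary illustrative rates (the construction and names are ours, not from the paper): off-diagonal entries connect only states that differ in x or in y, never in both, and the stationary distribution is the kernel of the generator.

```python
import itertools
import numpy as np

rng = np.random.default_rng(0)
nx, ny = 2, 2
states = list(itertools.product(range(nx), range(ny)))
idx = {s: i for i, s in enumerate(states)}

L = np.zeros((len(states), len(states)))   # column-stochastic generator
for (x, y) in states:
    for xp in range(nx):
        if xp != x:                        # x-jump at fixed y
            L[idx[(xp, y)], idx[(x, y)]] = rng.uniform(0.5, 2.0)
    for yp in range(ny):
        if yp != y:                        # y-jump at fixed x
            L[idx[(x, yp)], idx[(x, y)]] = rng.uniform(0.5, 2.0)
L -= np.diag(L.sum(axis=0))                # columns sum to zero

ev, V = np.linalg.eig(L)                   # stationary state: zero eigenvector
p_ss = np.real(V[:, np.argmin(np.abs(ev))])
p_ss /= p_ss.sum()
print({s: round(float(p_ss[i]), 4) for s, i in idx.items()})
```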
A. Information-theoretic quantities
We introduce important information-theoretic quantities. The strength of the correlation between X and Y can be quantified by the mutual information [44]:

I[X : Y] := Σ_{x,y} p_t(x, y) ln [p_t(x, y)/(p^X_t(x) p^Y_t(y))],

where p^X_t(x) = Σ_y p_t(x, y) and p^Y_t(y) = Σ_x p_t(x, y) denote the marginal distributions for X and Y, respectively. The mutual information is nonnegative and is equal to zero if and only if X and Y are independent.
The directional information from one variable to the other can be quantified by the information flow [45], defined as

İ^X := Σ_{x≠x', y} w^y_{xx'} p_t(x', y) ln [p_t(y|x)/p_t(y|x')],
İ^Y := Σ_{y≠y', x} w^{yy'}_x p_t(x, y') ln [p_t(x|y)/p_t(x|y')],

where p_t(y|x) = p_t(x, y)/p^X_t(x) and p_t(x|y) = p_t(x, y)/p^Y_t(y) denote the conditional probabilities. From the bipartite property, the sum of İ^X and İ^Y gives the time derivative of the mutual information [46]: d_t I[X : Y] = İ^X + İ^Y. In the steady state, İ^X and İ^Y have opposite signs because d_t I[X : Y] = 0. If İ^X > 0, the correlation between X and Y increases due to transitions in X; in other words, X gains information about Y. If İ^X < 0, in contrast, X_{t+dt} is less correlated with Y_t than X_t is. In this case, the information is destroyed or exploited by X.
B. Second law of information thermodynamics
Here, we formulate the second law of information thermodynamics. To this end, we impose the local detailed balance condition to ensure that the system is thermodynamically consistent [15,17,47]. Then, the entropy changes in the environment due to transitions in X and Y are identified as

Ṡ^X_env := Σ_{x≠x', y} w^y_{xx'} p_t(x', y) ln [w^y_{xx'}/w^y_{x'x}],
Ṡ^Y_env := Σ_{y≠y', x} w^{yy'}_x p_t(x, y') ln [w^{yy'}_x/w^{y'y}_x].

The average rate of change of the system entropy is identified as the time derivative of the system's Shannon entropy S[X, Y] := −Σ_{x,y} p_t(x, y) ln p_t(x, y). Then, the total entropy production rate σ̇ is given by

σ̇ := d_t S[X, Y] + Ṡ^X_env + Ṡ^Y_env ≥ 0,

where the nonnegativity is proved by using ln a ≤ a − 1 (a ≥ 0). The nonnegativity of the total entropy production rate is a manifestation of the second law of thermodynamics and is sometimes called the second law of stochastic thermodynamics [17]. From the bipartite property, σ̇ can be decomposed into two parts: σ̇ = σ̇^X + σ̇^Y. Here, σ̇^X and σ̇^Y denote the partial entropy production rates due to transitions in X and Y, respectively [48]:

σ̇^X := Ṡ^X_tot − İ^X, (18)
σ̇^Y := Ṡ^Y_tot − İ^Y, (19)

where Ṡ^Z_tot (Z = X, Y) can be interpreted as the entropy production rate associated with Z, which consists of the time derivative of Z's Shannon entropy S[Z] := −Σ_z p^Z_t(z) ln p^Z_t(z) and the entropy change in the environment due to transitions in Z: Ṡ^Z_tot := d_t S[Z] + Ṡ^Z_env. From the definition of the partial entropy production rates (18) and (19), it immediately follows that σ̇^X and σ̇^Y are individually nonnegative:

σ̇^X ≥ 0, σ̇^Y ≥ 0.

This is the so-called second law of information thermodynamics (see also Fig. 1). The important point here is that Ṡ^X_tot (Ṡ^Y_tot) can be negative if İ^X (İ^Y) is negative. This apparent violation of the second law of thermodynamics caused by information flow lies at the heart of the mechanism of Maxwell's demon [27]. In this case, X acts as an information-thermodynamic engine that converts information into output work or negative entropy production.

[Fig. 1 caption] In the case where X (blue) acts as a steady-state information-thermodynamic engine, X converts information (İ^X < 0) into negative entropy production (Ṡ^X_env < 0). Although only a single thermal bath is depicted here, our results hold even in the presence of multiple thermal baths.
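The definitions above can be verified numerically. The following sketch (our construction, with arbitrary random rates) checks that İ^X + İ^Y = 0 in the steady state and that both partial entropy production rates are nonnegative.

```python
import itertools
import numpy as np

rng = np.random.default_rng(1)
nx = ny = 2
# w^y_{x x'} (x' -> x at fixed y) and w^{y y'}_x (y' -> y at fixed x).
Wx = {(x, xp, y): rng.uniform(0.5, 2.0)
      for x, xp, y in itertools.product(range(nx), range(nx), range(ny)) if x != xp}
Wy = {(y, yp, x): rng.uniform(0.5, 2.0)
      for y, yp, x in itertools.product(range(ny), range(ny), range(nx)) if y != yp}

states = list(itertools.product(range(nx), range(ny)))
idx = {s: i for i, s in enumerate(states)}
L = np.zeros((len(states),) * 2)
for (x, xp, y), w in Wx.items():
    L[idx[(x, y)], idx[(xp, y)]] += w
    L[idx[(xp, y)], idx[(xp, y)]] -= w
for (y, yp, x), w in Wy.items():
    L[idx[(x, y)], idx[(x, yp)]] += w
    L[idx[(x, yp)], idx[(x, yp)]] -= w

ev, V = np.linalg.eig(L)
vec = np.real(V[:, np.argmin(np.abs(ev))])
p = {s: float(v_) for s, v_ in zip(states, vec / vec.sum())}
px = {x: sum(p[(x, y)] for y in range(ny)) for x in range(nx)}
py = {y: sum(p[(x, y)] for x in range(nx)) for y in range(ny)}

# Steady-state information flows and partial entropy production rates.
Idot_X = sum(w * p[(xp, y)] * np.log((p[(x, y)] / px[x]) / (p[(xp, y)] / px[xp]))
             for (x, xp, y), w in Wx.items())
Idot_Y = sum(w * p[(x, yp)] * np.log((p[(x, y)] / py[y]) / (p[(x, yp)] / py[yp]))
             for (y, yp, x), w in Wy.items())
sig_X = sum(w * p[(xp, y)] * np.log(w * p[(xp, y)] / (Wx[(xp, x, y)] * p[(x, y)]))
            for (x, xp, y), w in Wx.items())
sig_Y = sum(w * p[(x, yp)] * np.log(w * p[(x, yp)] / (Wy[(yp, y, x)] * p[(x, y)]))
            for (y, yp, x), w in Wy.items())
print(f"Idot_X + Idot_Y = {Idot_X + Idot_Y:.2e} (should vanish)")
print(f"sigma_X = {sig_X:.4f} >= 0, sigma_Y = {sig_Y:.4f} >= 0")
```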
III. THERMODYNAMIC UNCERTAINTY RELATION FOR BIPARTITE SYSTEMS
In this section, we explain our first main result, which can be regarded as an extension of the standard TUR (1) to bipartite systems. Hereafter, we assume that the whole system is in the steady state. See Appendices A and B for time-dependent cases.
A. Bipartite TUR

Let Ĵ_T be a generalized time-integrated current for the subsystem X with an arbitrary antisymmetric weight d^y_{xx'} = −d^y_{x'x}:

Ĵ_T := Σ_{x>x', y} d^y_{xx'} (n̂^y_{xx'} − n̂^y_{x'x}),

where n̂^y_{xx'} denotes the number of transitions from the state (x', y) to (x, y) during the time interval [0, T]. For example, the choice of d^y_{xx'} = ln w^y_{xx'}/w^y_{x'x} yields the stochastic entropy change in the environment due to transitions in X during [0, T]. In the steady state, the time-integrated stochastic information flow can also be expressed in this form (see (81)). We remark that when there are multiple environments with the label ν = 1, 2, ..., the weight d^y_{xx'} can depend on ν. The ensemble average of Ĵ_T reads

⟨Ĵ_T⟩ = T Σ_{x>x', y} d^y_{xx'} [w^y_{xx'} p_ss(x', y) − w^y_{x'x} p_ss(x, y)].

Our first main result is the following inequality:

Var[Ĵ_T]/⟨Ĵ_T⟩² ≥ 2(1 + δ_J)²/(∆S^X_tot − ∆I^X) (26)

for arbitrary observation time T, where ∆S^X_tot := ∫₀^T dt Ṡ^X_tot and ∆I^X := ∫₀^T dt İ^X denote the entropy production and time-integrated information flow associated with X, respectively. Here, δ_J := ⟨Ĵ_T⟩_q/⟨Ĵ_T⟩, and ⟨Ĵ_T⟩_q is a correction term defined through an auxiliary function q_t that satisfies q_0 = 0. This additional current term ⟨Ĵ_T⟩_q reflects the contribution of the interaction with Y. Indeed, if the transition rates for X and the weight are independent of Y, i.e., w^y_{xx'} = w_{xx'} and d^y_{xx'} = d_{xx'}, we can prove that ⟨Ĵ_T⟩_q = 0. For the derivation, see Appendix B, where we consider the bipartite TUR in a more general case, applicable to a transient state. If, in addition, X and Y are independent and thus ∆I^X = 0, then the bipartite TUR (26) reduces to the standard TUR (1), where the relative fluctuation of the current for X is lower bounded by the entropy production associated with X. In the general case where X and Y are correlated, the bipartite TUR (26) states that the relative fluctuation of the current for the subsystem X is lower bounded not only by the entropy production associated with X, but also by the information transfer between X and Y.
There are two important cases where the additional current term ⟨Ĵ T ⟩ q can be ignored and thus δ J → 0. The first case occurs in the short time limit T → 0. Since q 0 = 0, it immediately follows that ⟨Ĵ T ⟩ q → 0 as T → 0, while the information flow ∆I X remains finite in general. The second case occurs in the long time limit T → ∞ where there is a separation of time scales between X and Y . That is, the case where the observation time is long and Y evolves much faster than X. The proof of δ J → 0 in this case will be described in detail in Sec. III C. Since this case is typically realized when X acts as an information-thermodynamic engine, we will mainly focus on this case in the following sections.
B. Derivation of the bipartite TUR
Here, we prove the bipartite TUR (26) by using the generalized Cramér-Rao inequality [20,36,44,49]. We remark that the bipartite TUR can also be proved more directly from the master equation or Langevin equation [50] (see also Appendix A for the direct derivation for overdamped Langevin equations). We consider auxiliary dynamics parameterized by θ, with p^θ_0 = p_ss and parameterized transition rates w^y_{xx'}(θ). Let P_θ(Γ) be the parameterized path probability for the trajectory Γ = {x_t, y_t}^T_{t=0}, where n̂^{yy'}_x denotes the number of transitions from the state (x, y') to (x, y) during the time interval [0, T], and τ^y_x denotes the empirical dwell time in state (x, y), defined as the total amount of time spent in state (x, y) along the trajectory Γ: τ^y_x := ∫₀^T dt δ_{x x_t} δ_{y y_t}, where δ_{x x_t} (δ_{y y_t}) denotes the Kronecker delta, which is 1 if x = x_t (y = y_t) and zero otherwise. We denote by I(θ) := −⟨∂²_θ ln P_θ(Γ)⟩_θ the corresponding Fisher information [44], where ⟨·⟩_θ denotes the average with respect to P_θ. The generalized Cramér-Rao inequality then yields [20,36,44,49]

Var[Ĵ_T] ≥ (∂_θ⟨Ĵ_T⟩_θ|_{θ=0})²/I(0).

Here, I(0) can be bounded as I(0) ≤ (∆S^X_tot − ∆I^X)/2, where we have used the inequality 2(a − b)² ≤ (a + b)(a − b) ln(a/b) for a, b > 0. By substituting this bound into (29), we find that Var[Ĵ_T] ≥ 2(∂_θ⟨Ĵ_T⟩_θ|_{θ=0})²/(∆S^X_tot − ∆I^X). Then, ∂_θ⟨Ĵ_T⟩_θ|_{θ=0} can be calculated as ⟨Ĵ_T⟩ + ⟨Ĵ_T⟩_q. We thus arrive at the inequality (26).
C. Fast relaxation limit of Y
Here, we show that ⟨Ĵ_T⟩_q ≪ ⟨Ĵ_T⟩ in the long time limit T → ∞ if Y relaxes much faster than X. Let τ_X and τ_Y be the time scales of X and Y, respectively. We assume a separation of time scales, τ_Y ≪ τ_X, i.e., the auxiliary system Y evolves much faster than the system of interest X. This situation is typically realized when Y acts as a Maxwell's demon, i.e., when Y measures the state of X and performs feedback control [45]. We introduce a dimensionless slow time τ := t/τ_X and a small parameter ϵ := τ_Y/τ_X ≪ 1. Correspondingly, we nondimensionalize the transition rates as w̃^y_{xx'} := τ_X w^y_{xx'} and w̃^{yy'}_x := τ_Y w^{yy'}_x. We first take the long time limit T → ∞, i.e., T ≫ τ_X, and assume that q_t(x, y) reaches a stationary solution q_ss(x, y). Then, p_ss and q_ss satisfy a pair of stationary equations, (38) and (39). We now assume that p_ss and q_ss have asymptotic expansions in terms of the asymptotic sequence {ϵ^n}^∞_{n=0} as ϵ → 0, and impose a normalization condition on the expansion coefficients, where we have introduced q^X_ss(x) := Σ_y q_ss(x, y). Note that q^X_ss satisfies the normalization condition Σ_x q^X_ss(x) = 0.
By substituting these expansions into (38) and (39), we obtain the leading-order equations. Let π_ss(y|x) be the normalized zero-eigenvector that satisfies Σ_{y'} w̃^{yy'}_x π_ss(y'|x) = 0. Due to the irreducibility of the rate matrix, this normalized zero-eigenvector is unique for each x. Then, from the normalization condition, p^(0)_ss(x, y) = p^X_ss(x) π_ss(y|x). The subleading order of (38) and (39) yields (48) and (49). Note that (48) and (49) are linear equations for p^(1)_ss and q^(1)_ss with the matrix w̃^{yy'}_x, which has the left zero-eigenvector 1 because Σ_y w̃^{yy'}_x = 0. This property guarantees that the solutions p^(1)_ss and q^(1)_ss exist only under the solvability conditions, which correspond to (48) and (49) summed over y, respectively. Here, we have introduced the effective transition rate w̄_{xx'} := Σ_y w̃^y_{xx'} π_ss(y|x'). Then, from the Perron-Frobenius theorem, q^X_ss can be expressed as q^X_ss = N p^X_ss, where N denotes the normalization constant. Because q^X_ss satisfies the normalization condition Σ_x q^X_ss(x) = 0, we obtain N = 0. Thus, in the fast relaxation limit of Y, we have q^X_ss = 0. Therefore, the additional current term ⟨Ĵ_T⟩_q appearing in the bipartite TUR (26) is much smaller than ⟨Ĵ_T⟩ in the long time limit T → ∞, and δ_J → 0. To summarize, in the fast relaxation limit of Y, the bipartite TUR (26) reduces to a form similar to the standard TUR (1) in the long time limit:

J²/D_J ≤ Ṡ^X_tot − İ^X, (55)

where D_J := lim_{T→∞} Var[Ĵ_T]/2T denotes the fluctuation of Ĵ_T, and J := lim_{T→∞} ⟨Ĵ_T⟩/T denotes the mean current. Note that (55) gives a tighter lower bound on the fluctuation of a current than the standard TUR, because Ṡ^X_tot − İ^X is smaller than or equal to the total entropy production σ̇. If the partial entropy production of Y is zero, then (55) can also be obtained from the standard TUR.
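Inequality (55) can be checked numerically via the tilted-generator route used later in Sec. V: μ(λ) is the largest eigenvalue of the tilted generator, and J and 2D_J are its first two derivatives at λ = 0. The sketch below is ours (random rates, finite differences) and takes the partial entropy production itself as the current; the fast auxiliary system is obtained by scaling the Y rates by 1/ϵ.

```python
import itertools
import numpy as np

rng = np.random.default_rng(2)
nx = ny = 2
eps = 1e-2                                   # time-scale ratio tau_Y / tau_X
Wx = {(x, xp, y): rng.uniform(0.5, 2.0)
      for x, xp, y in itertools.product(range(nx), range(nx), range(ny)) if x != xp}
Wy = {(y, yp, x): rng.uniform(0.5, 2.0) / eps
      for y, yp, x in itertools.product(range(ny), range(ny), range(nx)) if y != yp}
states = list(itertools.product(range(nx), range(ny)))
idx = {s: i for i, s in enumerate(states)}

def tilted(lam):
    # X-jumps carry the antisymmetric weight d = ln(w+/w-); Y-jumps carry none.
    L = np.zeros((len(states),) * 2)
    for (x, xp, y), w in Wx.items():
        d = np.log(w / Wx[(xp, x, y)])
        L[idx[(x, y)], idx[(xp, y)]] += w * np.exp(lam * d)
        L[idx[(xp, y)], idx[(xp, y)]] -= w
    for (y, yp, x), w in Wy.items():
        L[idx[(x, y)], idx[(x, yp)]] += w
        L[idx[(x, yp)], idx[(x, yp)]] -= w
    return L

mu = lambda lam: np.max(np.real(np.linalg.eigvals(tilted(lam))))
h = 1e-3
J = (mu(h) - mu(-h)) / (2 * h)                     # mean current
D_J = (mu(h) - 2 * mu(0.0) + mu(-h)) / (2 * h**2)  # current fluctuation

ev, V = np.linalg.eig(tilted(0.0))                 # steady state for sigma_X
vec = np.real(V[:, np.argmin(np.abs(ev))])
p = {s: float(v_) for s, v_ in zip(states, vec / vec.sum())}
sig_X = sum(w * p[(xp, y)] * np.log(w * p[(xp, y)] / (Wx[(xp, x, y)] * p[(x, y)]))
            for (x, xp, y), w in Wx.items())       # = S^X_tot - I^X in steady state
print(f"J^2/D_J = {J * J / D_J:.4f}  <=  sigma_X = {sig_X:.4f}")
```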
D. Equality condition
The equality of the bipartite TUR in the fast relaxation limit of Y (55) can be achieved even far from equilibrium. This nontrivial fact will be shown later with a simple example in Sec. VI B. This property stands in stark contrast to the standard TUR, where the equality is guaranteed only in the near-equilibrium limit [20,36]. Here, before presenting the example in Sec. VI B, we discuss a possible scenario for achieving the equality of the bipartite TUR (26) in a somewhat abstract manner.
We first consider the equality condition of the generalized Cramér-Rao inequality at θ = 0 (34). Because the generalized Cramér-Rao inequality is based on the Cauchy-Schwarz inequality, the equality condition is satisfied if and only if relation (56) holds [36], where C is a constant. The right-hand side of (56) is given by (57), while the left-hand side (58) involves the current, which we decompose into two parts [50]: Ĵ_T = Ĵ^I_T + Ĵ^II_T. Note that ⟨Ĵ^I_T⟩ = 0 and ⟨Ĵ^II_T⟩ = ⟨Ĵ_T⟩. By comparing (57) and (58), we expect d^y_{xx'} = C Z^y_{xx'} to be the optimal choice. However, due to the presence of Ĵ^II_T − ⟨Ĵ_T⟩, the equality condition is generally not satisfied.
In the standard TUR, it is known that the equality can be achieved by including a generalized time-integrated static observable in addition to the current Ĵ_T [20,50,51]. Here, we consider the generalized time-integrated static observable Ô_T := ∫₀^T dt ρ^{y_t}_{x_t}, where ρ^y_x is an arbitrary weight that depends on the state (x, y). Even for the observable Ĵ_T + Ô_T instead of Ĵ_T, by following the same argument described in Sec. III B, we can derive a bipartite-correlation TUR, (62) and (63), where ⟨Ô_T⟩_q is defined analogously to ⟨Ĵ_T⟩_q. In this case, the equality condition of the inequality (62) is given by (64). Then, we find that the choice (66) and (67) yields equality, where we have used the fact that Ô_T = −Ĵ^II_T for this choice. Thus, the equality of (62) is achieved for this choice of Ĵ_T and Ô_T. However, even if we can achieve the equality of (62), the equality condition of the second inequality (63) is not satisfied in general. Still, in the overdamped Langevin case, the inequality I(0) ≤ (∆S^X_tot − ∆I^X)/2 becomes an equality (see [52] for the detailed discussion). Therefore, the equality of the bipartite-correlation TUR is achieved in the overdamped Langevin case even far from equilibrium for the choice (66) and (67), where we have used ⟨Ô_T⟩_q = −⟨Ĵ_T⟩_q. In the long time limit T → ∞, (69) can be rewritten as

J²/D^I_J = Ṡ^X_tot − İ^X, (70)

where D^I_J := lim_{T→∞} Var[Ĵ^I_T]/2T denotes the fluctuation of Ĵ^I_T. Note that D^I_J is generally different from D_J, and thus the equality (70) does not generally correspond to the equality of the bipartite TUR in the fast relaxation limit of Y (55). To put it another way, if D^I_J = D_J in the fast relaxation limit, then the equality of (55) is achieved.
Therefore, the difference between D^I_J and D_J is given by D_J − D^I_J = D^II_J in the fast relaxation limit of Y. Thus, D^II_J = 0 is a sufficient condition for (55) to hold with equality. In Sec. VI B, we give an example that satisfies this sufficient condition.
IV. TRADE-OFF RELATIONS
In this section, we focus on the regime where Y evolves much faster than X and show that the bipartite TUR in this regime (55) provides trade-off relations for the performance of information processing systems. In Sec. IV A, we consider the situation where the subsystem X can be regarded as a steady-state information-thermodynamic engine, while the external system Y plays the role of a memory of Maxwell's demon. From the second law of information thermodynamics (22), this situation corresponds to the case where 0 < −Ṡ^X_env ≤ −İ^X. While this situation may be typical in the regime of the fast relaxation limit of Y, we can also consider the case where the slow system X plays the role of a memory and measures the state of the fast system Y, i.e., 0 < İ^X ≤ Ṡ^X_env. Even for this case, we can show that the bipartite TUR (55) provides trade-off relations on the performance of the memory, which will be described in Sec. IV B.
A. Information-thermodynamic engine

Here, we show that the bipartite TUR gives several universal bounds on the performance of information-thermodynamic engines. In this case, both the entropy production and the information flow associated with X are negative and satisfy the relation 0 < −Ṡ^X_env ≤ −İ^X. The performance of an information-thermodynamic engine can then be quantified by, e.g., the information-thermodynamic efficiency [45]

η^X_S := |Ṡ^X_env|/|İ^X|,

which satisfies 0 ≤ η^X_S ≤ 1 as a direct consequence of the second law of information thermodynamics. This efficiency quantifies how efficiently the engine X converts information into negative entropy production. In addition to this information-thermodynamic efficiency, the negative entropy production rate itself is an important indicator characterizing the performance of an information-thermodynamic engine. Here, we show that there is the following trade-off relation between η^X_S and |Ṡ^X_env|:

|Ṡ^X_env| ≤ D_S (1 − η^X_S)/η^X_S, (74)

where D_S := lim_{T→∞} Var[∆Ŝ^X_env]/2T denotes the fluctuation of the stochastic medium entropy production ∆Ŝ^X_env. This inequality states that an information engine with a finite negative entropy production rate cannot achieve η^X_S = 1 as long as the fluctuation D_S is finite. In order to achieve a finite negative entropy production rate with η^X_S = 1, the fluctuation D_S must diverge. We can also prove a similar trade-off relation where the negative entropy production rate is bounded by the fluctuation of the time-integrated stochastic information flow ∆Î^X instead of D_S:

|Ṡ^X_env| ≤ D_I η^X_S (1 − η^X_S), (76)

where D_I := lim_{T→∞} Var[∆Î^X]/2T. The inequalities (74) and (76) are the second main results of this paper. In Sec. IV A 1, we provide a detailed proof of these inequalities. In Sec. IV A 2, we briefly discuss which of the two inequalities (74) and (76) gives a tighter bound on the negative entropy production rate. In Sec. IV A 3, we derive a trade-off relation in terms of power, i.e., output work produced per unit of time, instead of the negative entropy production. This relation can be regarded as a direct extension of the trade-offs for heat engines [21,22] to information-thermodynamic engines.

1. Derivation of (74) and (76)

Here, we derive the trade-off relations (74) and (76) by using the bipartite TUR in the fast relaxation limit of Y (55), which can be rewritten as follows:

J²/D_J ≤ Ṡ^X_env − İ^X. (78)

Let us choose the stochastic medium entropy production associated with X as the current Ĵ_T in (78):

Ĵ_T = ∆Ŝ^X_env := Σ_{x>x', y} ln (w^y_{xx'}/w^y_{x'x}) (n̂^y_{xx'} − n̂^y_{x'x}),

which satisfies ⟨∆Ŝ^X_env⟩ = ∆S^X_env. Then, we immediately obtain the trade-off between entropy production and efficiency (74).
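For completeness, the algebra behind "immediately obtain" is short; the following sketch uses only the steady-state identity Ṡ^X_tot = Ṡ^X_env and the definitions above.

```latex
% With \hat{J}_T = \Delta\hat{S}^X_{\mathrm{env}}, we have J = \dot{S}^X_{\mathrm{env}}
% and D_J = D_S, so (78) reads
\frac{(\dot{S}^X_{\mathrm{env}})^2}{D_S} \le \dot{S}^X_{\mathrm{env}} - \dot{I}^X .
% For an engine, \dot{S}^X_{\mathrm{env}} = -|\dot{S}^X_{\mathrm{env}}| and
% \dot{I}^X = -|\dot{I}^X|, with \eta^X_S = |\dot{S}^X_{\mathrm{env}}|/|\dot{I}^X|, hence
\frac{|\dot{S}^X_{\mathrm{env}}|^2}{D_S}
  \le |\dot{I}^X| - |\dot{S}^X_{\mathrm{env}}|
   =  |\dot{S}^X_{\mathrm{env}}|\,\frac{1-\eta^X_S}{\eta^X_S}
\quad\Longrightarrow\quad
|\dot{S}^X_{\mathrm{env}}| \le D_S\,\frac{1-\eta^X_S}{\eta^X_S} .
```

Choosing ∆Î^X instead gives |İ^X| ≤ D_I (1 − η^X_S), and multiplying by η^X_S yields (76).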
Another trade-off relation, (76), can be derived by choosing the time-integrated stochastic information flow as the current Ĵ_T in (78). Here, the instantaneous stochastic information flow is defined as the partial rate of change of the stochastic mutual information I(x_t : y_t) := ln[p_t(x_t, y_t)/(p^X_t(x_t) p^Y_t(y_t))] due to jumps in X, where t_n denotes the time at which X jumps from x_{t_n^-} to x_{t_n^+}, and w̄_{xx'} := Σ_y w^y_{xx'} p_t(y|x') denotes the effective transition rate. In the steady state, the last two terms vanish, so that the time-integrated stochastic information flow reads

∆Î^X := Σ_{x>x', y} ln [p_ss(y|x)/p_ss(y|x')] (n̂^y_{xx'} − n̂^y_{x'x}), (81)

which satisfies ⟨∆Î^X⟩ = ∆I^X. Substituting Ĵ_T = ∆Î^X in the bipartite TUR (78), we obtain the inequality (76). Note that we can also obtain a trade-off relation between the information flow and efficiency:

|İ^X| ≤ D_I (1 − η^X_S). (82)
2. Tightness of the bounds
Here, we consider which of the two inequalities (74) and (76) gives a tighter bound on the negative entropy production rate. Comparing the two upper bounds shows that the bound (74) is tighter than (76) when D_S/D_I < η^X_S ≤ 1, while (76) becomes tighter than (74) when 0 ≤ η^X_S < D_S/D_I. Note that D_S/D_I may depend on η^X_S. In the linear response regime with Ṡ^X_env ≤ 0 and İ^X ≤ 0, we can prove the input-output fluctuation inequality D_S ≤ D_I (for the derivation, see Sec. V B). Beyond the linear response regime, however, the input-output fluctuation inequality can be violated, i.e., D_S can become larger than D_I [26].
3. Trade-off between power and efficiency
While we have focused on the negative entropy production rate to characterize the performance of an information-thermodynamic engine, we can also derive a trade-off relation in terms of power, i.e., output work produced per unit of time. To define power, we assume that the transition rates satisfy a local detailed balance condition of the standard form [17], where β = (k_B T)⁻¹ denotes the inverse temperature, ϵ_{xy} denotes the energy of the state (x, y), and ∆^y_{xx'} denotes the energy provided by an external agent during the transition (x', y) → (x, y). Then, the average rate of heat absorbed by X from the environment is identified as Q̇^X, and the average rate of work done by the external agent on X is identified as Ẇ^X; the average rate of change of the internal energy is denoted Ė^X. If we regard x as an externally manipulated control parameter driving Y, then Ė^X can also be identified as the power delivered from X to Y [12,53], Ẇ^{X→Y}. Similarly, we can define Ẇ^Y, Q̇^Y, and Ẇ^{Y→X}. Then, the first law of stochastic thermodynamics for each subsystem can be expressed in an averaged form as Ė^X = Ẇ^X + Q̇^X + Ẇ^{Y→X} and Ė^Y = Ẇ^Y + Q̇^Y + Ẇ^{X→Y}. By using these relations, we can rewrite the second law of information thermodynamics in the steady state as βẆ^X + βẆ^{Y→X} + İ^Y ≥ 0 and βẆ^Y + βẆ^{X→Y} + İ^X ≥ 0. Here, we have assumed that X and Y are each in contact with a thermal bath at temperature T, while the extension to the case of different temperatures is straightforward [13]. Note that Ẇ^{X→Y} = −Ẇ^{Y→X} and İ^X = −İ^Y in the steady state. Therefore, Ẇ^X and Ẇ^Y cannot both be negative. Now, suppose that X operates as an information-thermodynamic engine, i.e., Ẇ^Y > 0 and Ẇ^X < 0. In this case, we can introduce the following efficiency:

η^X_W := |βẆ^X|/(βẆ^{Y→X} + İ^Y),

which satisfies 0 ≤ η^X_W ≤ 1, as can be seen from the second law of information thermodynamics. The denominator βẆ^{Y→X} + İ^Y = −βẆ^{X→Y} − İ^X ≥ 0 is called the transduced capacity [11,53], because it constrains the conversion of the input power Ẇ^Y into the output power |Ẇ^X| as βẆ^Y ≥ βẆ^{Y→X} + İ^Y ≥ |βẆ^X|. The efficiency η^X_W quantifies how efficiently X converts the transduced capacity into the output power |Ẇ^X|. Now we derive a trade-off relation between the output power and the efficiency η^X_W by using the bipartite TUR (78). Let us choose the stochastic work as the current Ĵ_T in (78): Ĵ_T = ∆Ŵ^X. Then, the bipartite TUR gives

|Ẇ^X| ≤ β D_W (1 − η^X_W)/η^X_W, (95)

where D_W := lim_{T→∞} Var[∆Ŵ^X]/2T denotes the fluctuation of the output work. The inequality (95) states that an information engine with a finite output power cannot achieve η^X_W = 1 as long as the fluctuation D_W is finite.
Although we have assumed that X evolves much more slowly than Y, there may be a situation where X measures the state of Y, i.e., İ^X > 0. Even in this case, we can prove similar trade-off relations concerning the performance of the memory X. We first note that both the entropy production and the information flow associated with X are positive and satisfy the relation 0 < İ^X ≤ Ṡ^X_env. Then, we can introduce the following information-thermodynamic efficiency:

η^X_I := İ^X/Ṡ^X_env,

which satisfies 0 ≤ η^X_I ≤ 1. In contrast to η^X_S, this efficiency quantifies how efficiently X gains information about Y relative to the energy dissipation or thermodynamic cost. Now we choose the time-integrated stochastic information flow as the current Ĵ_T in the bipartite TUR (78). By noting the positivity of Ṡ^X_env and İ^X, the bipartite TUR gives the following inequality:

İ^X ≤ D_I (1 − η^X_I)/η^X_I.

This inequality states that a memory with a finite information flow can never attain η^X_I = 1 as long as D_I is finite.
If we choose the stochastic entropy production as the current, Ĵ_T = ∆Ŝ^X_env, then we can obtain a similar trade-off relation where the information flow is bounded by the fluctuation of the stochastic entropy production D_S instead of D_I:

İ^X ≤ D_S η^X_I (1 − η^X_I).
V. GALLAVOTTI-COHEN SYMMETRY AND INPUT-OUTPUT FLUCTUATION INEQUALITIES
In this section, we prove that the Gallavotti-Cohen symmetry [33][34][35] is satisfied in the fast relaxation limit of Y . As a consequence of this symmetry, we can further show that the input-output fluctuation inequalities hold in the linear response regime even in the presence of an information flow.
A. Gallavotti-Cohen symmetry
Let µ(λ_S, λ_I) be the scaled cumulant generating function of the time-integrated currents ∆Ŝ^X_env and ∆Î^X, defined by

µ(λ_S, λ_I) := lim_{T→∞} (1/T) ln ⟨exp(λ_S ∆Ŝ^X_env + λ_I ∆Î^X)⟩,

where λ_S and λ_I are the counting fields for ∆Ŝ^X_env and ∆Î^X, respectively. In this section, we prove that µ(λ_S, λ_I) satisfies the Gallavotti-Cohen symmetry (116) in the fast relaxation limit of Y. To prove this, we first note that µ(λ_S, λ_I) can be rewritten in terms of the generating function conditioned on a final state (x, y),

G_T(x, y) := ∫ d∆S^X_env d∆I^X p_T(x, y, ∆S^X_env, ∆I^X) exp(λ_S ∆S^X_env + λ_I ∆I^X),

where p_T(x, y, ∆S^X_env, ∆I^X) denotes the joint probability density such that the state of the system at time T is (x, y) and the entropy production and information flow generated up to that time are ∆S^X_env and ∆I^X, respectively. Therefore, the properties of the scaled cumulant generating function µ(λ_S, λ_I) are encoded in the time evolution equation of the generating function G_T(x, y). This time evolution equation can be obtained from that of p_T(x, y, ∆S^X_env, ∆I^X), where we have used the dimensionless slow time τ := T/τ_X and dimensionless transition rates w̃^y_{xx'} := τ_X w^y_{xx'} and w̃^{yy'}_x := τ_Y w^{yy'}_x. We then find that the time evolution of G_τ(x, y) is described by the tilted dynamics (105), with tilted generators L^X_{λ_S,λ_I} and L^Y_{λ_S,λ_I}. We now assume that G_τ has an asymptotic expansion in terms of the asymptotic sequence {ϵ^n}^∞_{n=0} as ϵ → 0, and impose a normalization condition. By substituting this expansion into (105), we find that, from the Perron-Frobenius theorem and the normalization condition, the leading order G^(0)_τ depends only on x. The subleading order of (105) then yields, via the solvability condition for G^(1)_τ, the effective dynamics for G^X_τ(x): ∂_τ G^X_τ = L̄^X_{λ_S,λ_I} G^X_τ, where L̄^X_{λ_S,λ_I} denotes the effective tilted generator. Importantly, this effective tilted generator satisfies a transposition symmetry under the replacement of the counting fields, where ⊤ denotes the matrix transpose. Because the scaled cumulant generating function is equal to the largest eigenvalue of this effective tilted generator, the Gallavotti-Cohen symmetry (116) follows from this property.
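The single-current version of this symmetry, μ(λ) = μ(−1−λ) for the entropy production, is easy to verify numerically on a driven ring (a standard textbook example, not this paper's bipartite setting); the bipartite statement in the fast-Y limit has the same structure with two counting fields.

```python
import numpy as np

# Driven three-state ring with uniform forward/backward rates (k_B = 1).
kp, km, n = 2.0, 0.5, 3
A = np.log(kp / km)               # entropy produced per forward jump

def mu(lam):
    L = np.zeros((n, n))
    for i in range(n):
        L[(i + 1) % n, i] += kp * np.exp(lam * A)    # forward jump, weight +A
        L[(i - 1) % n, i] += km * np.exp(-lam * A)   # backward jump, weight -A
        L[i, i] -= kp + km
    return float(np.max(np.real(np.linalg.eigvals(L))))

for lam in (-0.3, 0.1, 0.7):      # mu(lam) and mu(-1-lam) should coincide
    print(f"mu({lam:+.1f}) = {mu(lam):+.6f}   mu({-1 - lam:+.1f}) = {mu(-1 - lam):+.6f}")
```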
B. Input-output fluctuation inequalities
In the linear response regime, where the scaled cumulant generating function can be approximated by a quadratic form [17], the Gallavotti-Cohen symmetry (116) constrains that form up to three constants a, b, c. From the convexity of µ(λ_S, λ_I), these coefficients satisfy a ≥ 0, c ≥ 0, and ac − b²/4 ≥ 0. By noting that Ṡ^X_env and İ^X are given by the first derivatives of µ(λ_S, λ_I) at the origin, these coefficients are further constrained by the second law of information thermodynamics to satisfy a + b + c ≥ 0.
1. Information-thermodynamic engine

We consider the case of −İ^X ≥ Ṡ^X_env, i.e., c ≥ a, which includes the case where X acts as an information-thermodynamic engine with 0 < −Ṡ^X_env ≤ −İ^X. Since D_S = ∂²_{λ_S} µ|₀ = 2a and D_I = ∂²_{λ_I} µ|₀ = 2c, these relations between the coefficients a, b, c lead to the following input-output fluctuation inequalities:

D_S ≤ D_I, (122)
D_S/(Ṡ^X_env)² ≥ D_I/(İ^X)². (123)

These inequalities state that the fluctuation of the output current (negative entropy production) is smaller than that of the input current (information flow), while the relative fluctuation of the output current is larger than that of the input current.
We can also derive input-output fluctuation inequalities when −İ^X ≤ Ṡ^X_env, i.e., c ≤ a, which includes the case where X plays the role of a memory with 0 < İ^X ≤ Ṡ^X_env. In this case, the information flow İ^X corresponds to the output current, while the entropy production rate Ṡ^X_env corresponds to the input current. Obviously, we have the following relations:

D_I ≤ D_S, (124)
D_I/(İ^X)² ≥ D_S/(Ṡ^X_env)². (125)
VI. EXAMPLES
In this section, we illustrate our results, the trade-offs for information-thermodynamic engines and the input-output fluctuation inequalities, using two simple examples. The first example is coupled quantum dots, which is one of the simplest models of an autonomous Maxwell's demon [45,54]. As a second example, we consider coupled linear overdamped Langevin equations, which appear ubiquitously in biological contexts via the linear noise approximation [5,[55][56][57]]. Interestingly, the equality condition of the trade-offs (74) and (76) is satisfied even far from equilibrium in this case.
A. Coupled quantum dots
Model
We consider a system composed of two single-level quantum dots X and Y. Let x ∈ {0, 1} and y ∈ {0, 1} be occupation variables on each particle site, where x = 1 and y = 1 (x = 0 and y = 0) represent that the site of X and Y is filled (empty), respectively. The energy of X is ϵ_X when it is filled with a particle and zero when it is empty. The single particle site of X exchanges particles with two particle reservoirs ν = L, R at temperature T and chemical potentials µ_ν. We assume that ∆µ := µ_L − µ_R > 0. Let p_t(x, y) be the probability of state (x, y) at time t; its time evolution is described by the corresponding master equation, where x' := 1 − x and y' := 1 − y. Here, w^{(ν)y}_{xx'} denotes the time-independent transition rate from x' to x induced by the reservoir ν, which satisfies the local detailed balance condition:

w^{(ν)y}_{10}/w^{(ν)y}_{01} = exp(−β(ϵ_X − µ_ν)). (127)

We suppose that the transition rates have the form w^{(ν)y}_{10} = Γ^{(ν)y} f_ν and w^{(ν)y}_{01} = Γ^{(ν)y} (1 − f_ν), where f_ν := [exp(β(ϵ_X − µ_ν)) + 1]⁻¹ is the Fermi distribution function and Γ^{(ν)y} ∈ {Γ_X, Γ̃_X} is a positive coupling strength. Below, we focus on the case where Γ̃_X ≪ Γ_X. This form of the transition rates implies that the coupling strength of the R (L) reservoir changes from Γ_X to Γ̃_X when Y is filled (empty) with a particle. The transition rates associated with Y are such that Y jumps toward y = x with rate Γ_Y (1 − ε) and toward y = 1 − x with rate Γ_Y ε, where Γ_Y is a coupling strength and ε can be interpreted as an error probability with 0 ≤ ε ≤ 1.
In this model, the subsystem Y acts as Maxwell's demon when ε is sufficiently small. To understand this point intuitively, let us consider the state of Y as representing the position of the wall, which is inserted between the single site of X and the reservoir. In other words, when y = 0 (y = 1), the wall is inserted between the site of X and the L (R) reservoir and prohibits the transition due to the L (R) reservoir by changing the coupling strength from Γ X toΓ X (see Fig. 2). As a result, particles are transferred from the R to L reservoirs against the chemical potential difference.
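The four-state model can be assembled directly from this description. In the sketch below the rate structure (Fermi factors, y-dependent couplings, Y relaxing toward y = x with error ε) follows the text, but all parameter values are illustrative and the variable names are ours.

```python
import numpy as np

beta, epsX, muL, muR = 1.0, 0.5, 1.0, 0.0
GX, GXt, GY, err = 1.0, 0.05, 50.0, 0.05          # Gamma_X, Gamma~_X, Gamma_Y, epsilon
f = {"L": 1.0 / (np.exp(beta * (epsX - muL)) + 1.0),
     "R": 1.0 / (np.exp(beta * (epsX - muR)) + 1.0)}
g = {("L", 0): GXt, ("L", 1): GX, ("R", 0): GX, ("R", 1): GXt}  # coupling(nu, y)

idx = {(x, y): 2 * x + y for x in (0, 1) for y in (0, 1)}
L = np.zeros((4, 4))
for y in (0, 1):
    for nu in ("L", "R"):
        L[idx[(1, y)], idx[(0, y)]] += g[(nu, y)] * f[nu]          # fill: 0 -> 1
        L[idx[(0, y)], idx[(1, y)]] += g[(nu, y)] * (1.0 - f[nu])  # empty: 1 -> 0
for x in (0, 1):
    L[idx[(x, x)], idx[(x, 1 - x)]] += GY * (1.0 - err)  # Y relaxes toward y = x
    L[idx[(x, 1 - x)], idx[(x, x)]] += GY * err          # erroneous Y jump
L -= np.diag(L.sum(axis=0))

ev, V = np.linalg.eig(L)
p = np.real(V[:, np.argmin(np.abs(ev))])
p /= p.sum()
# Net particle current from L into X; positive means ordinary L -> X -> R flow.
J_L = sum(g[("L", y)] * (f["L"] * p[idx[(0, y)]] - (1.0 - f["L"]) * p[idx[(1, y)]])
          for y in (0, 1))
print(f"net current from L into X: {J_L:+.5f} (negative: pumping against the bias)")
```

With a small error probability the printed current is negative: the demon transfers particles toward the high-chemical-potential reservoir, as described above.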
Fast relaxation limit of Y
Hereafter, we focus on the case where Y is faster than X, i.e., Γ_Y ≫ Γ_X ≫ Γ̃_X. By performing a perturbation expansion following Sec. III C, we can show that p_t(x, y) ≃ p^X_t(x) π_ss(y|x) with

π_ss(y = x|x) = 1 − ε, (133)
π_ss(y = 1 − x|x) = ε. (134)

The effective dynamics for X is then governed by the effective transition rates

w̄^{(ν)}_{xx'} := Σ_y w^{(ν)y}_{xx'} π_ss(y|x'). (136)

Thus, in the fast relaxation limit of Y, the system X can be considered as an autonomous system where the coupling strengths of the reservoirs change autonomously. More specifically, when x = 1, the coupling strength of the R reservoir changes from the original strength, Γ_X, to a smaller value, Γ̃_X, while that of the L reservoir remains unchanged. In contrast, when x = 0, the coupling strength of the L reservoir becomes small while that of the R reservoir remains at the original strength Γ_X. This autonomous control is probabilistic and has the error probability ε.
Trade-off between power and efficiency
We first consider the trade-off between the negative entropy production rate and the information-thermodynamic efficiency (74). Note that (74) corresponds to the trade-off between power and efficiency (95), because Ṡ^X_env = βẆ^X in this model.
We first calculate the average rate of chemical work Ẇ^X; here, b_{xx'} µ_ν corresponds to the energy provided by the particle reservoir ν during the transition (x', y) → (x, y). In the fast relaxation limit of Y, using p_ss(x', y) ≃ π_ss(y|x') p^X_ss(x'), the average rate of chemical work is proportional to the net particle current J_X from L to R, which is conjugate to the chemical potential difference ∆µ. The net particle current J_X becomes negative when ε is smaller than a critical value ε*. Note that ε* < 1/2, which follows from the condition ∆µ = µ_L − µ_R > 0. Similarly, using p_ss(x', y) ≃ π_ss(y|x') p^X_ss(x') in the fast relaxation limit of Y, the information flow can be expressed in terms of the information affinity

F_I := ln [π_ss(0|0) π_ss(1|1)/(π_ss(0|1) π_ss(1|0))]

and the probability current J_I conjugate to F_I. Thus, the tight-coupling condition is satisfied in the limit Γ̃_X/Γ_X ≪ 1. Since ε* < 1/2, the information flow İ^X also becomes negative when ε < ε*. The fluctuation of the chemical work can be calculated by considering the tilted dynamics (see Appendix C); the result is proportional to D_n, the fluctuation of the net particle current. We now focus on the case of ε < ε*, where the system X acts as an information-thermodynamic engine with Ẇ^X < 0. The corresponding information-thermodynamic efficiency reads η^X_W ≃ F_X/F_I, where F_X := β∆µ denotes the thermodynamic affinity conjugate to J_X. The ε-dependence of the efficiency η^X_W and the output power |Ẇ^X| is shown in Figs. 3(a) and 3(b), respectively. From this figure, we can see that the output power does not remain finite as η^X_W → 1. This result is consistent with the trade-off between power and information-thermodynamic efficiency (95), as illustrated in Fig. 3(b).
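The critical error probability ε* can be located numerically by scanning the sign of the effective particle current built from the rates (136); the sketch below assumes the same illustrative parameters as in the four-state sketch above.

```python
import numpy as np

beta, epsX, muL, muR, GX, GXt = 1.0, 0.5, 1.0, 0.0, 1.0, 0.05
f = {"L": 1.0 / (np.exp(beta * (epsX - muL)) + 1.0),
     "R": 1.0 / (np.exp(beta * (epsX - muR)) + 1.0)}
g = {("L", 0): GXt, ("L", 1): GX, ("R", 0): GX, ("R", 1): GXt}

def JX(err):
    # pi_ss(y|x'): y = x' with probability 1 - err.
    pi = {(y, x): (1.0 - err) if y == x else err for x in (0, 1) for y in (0, 1)}
    fill  = {nu: sum(g[(nu, y)] * f[nu] * pi[(y, 0)] for y in (0, 1))
             for nu in ("L", "R")}
    empty = {nu: sum(g[(nu, y)] * (1.0 - f[nu]) * pi[(y, 1)] for y in (0, 1))
             for nu in ("L", "R")}
    p1 = sum(fill.values()) / (sum(fill.values()) + sum(empty.values()))
    return fill["L"] * (1.0 - p1) - empty["L"] * p1   # net current L -> X (-> R)

errs = np.linspace(0.0, 0.5, 501)
currents = np.array([JX(e) for e in errs])
print(f"J_X changes sign near eps* = {errs[np.argmax(currents > 0)]:.3f}")
```

For ε below the printed value the effective current runs against the chemical bias, which is the engine regime discussed in the text.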
We next consider the trade-off relation where the negative entropy production is bounded by the fluctuation of the time-integrated stochastic information flow D_I (76), which can be expressed in terms of the power Ẇ^X. The fluctuation of the information flow D_I can also be calculated by using the tilted dynamics, and it satisfies βD_W/D_I = F_X/F_I ≃ η^X_W. Therefore, from (83), it follows that the upper bound of (150) is exactly the same as that of (95) for ε < ε*.
For comparison, we also plot the information flow and its upper bound (82) in Fig. 3(c). As in the case of the output power, the information flow also vanishes as the efficiency η X S (= η X W ) approaches 1. We note that |İ X | → ∞ as ε → 0 because the information affinity F I diverges.
Input-output fluctuation inequalities
We now consider the input-output fluctuation inequalities for ε < ε * , where the entropy production (Ṡ X env = βẆ X ) and the information flow correspond to the output and input currents, respectively. Since F X /F I ≤ 1, we can easily confirm that D S ≤ D I and D I /(İ X ) 2 = D S /(Ṡ X env ) 2 . Thus, the input-output fluctuation inequalities are satisfied even beyond the linear response regime in this model. Furthermore, the equality is achieved for the inequality regarding the relative fluctuations.
B. Coupled linear overdamped Langevin equations

In the fast relaxation limit ϵ → 0, the joint probability density p_τ(x, y) can be approximated as p_τ(x, y) ≃ p^X_τ(x) π_ss(y|x), where π_ss(y|x) denotes the stationary distribution of the fast variable conditioned on x; this determines the resulting effective dynamics for X (161).

3. Trade-off between negative entropy production and efficiency

We first consider the trade-off between the negative entropy production and efficiency (74). Note that, unlike in the previous example, this trade-off is not the same as the trade-off between power and efficiency (95), because there is no externally applied work in this system. In the steady state with the fast relaxation limit of Y, the entropy production rate associated with X can be written in terms of the Stratonovich product, denoted by •. We note that Ṡ^X_env is induced by the fast variable y_t, which does not appear in the effective dynamics for X (161). In other words, Ṡ^X_env is an entropy production invisible from the effective dynamics, which is called hidden entropy [62,63]. Similarly, the information flow İ^X can be calculated. In the context of the Brownian gyrator, we can show that there is a torque which remains finite even in the fast relaxation limit of Y; both the medium entropy production rate Ṡ^X_env and the information flow İ^X are proportional to this "hidden" torque. The fluctuation of the entropy production can be calculated by considering the tilted dynamics (see Appendix D for the derivation). We now focus on the case where ω_XY ω_YX > 0 and ω_XY D_Y < ω_YX D_X. In this case, both the entropy production rate and the information flow become negative, i.e., X acts as an information-thermodynamic engine, and the corresponding information-thermodynamic efficiency η^X_S can be computed explicitly. Combining (165) and (164), we find that the upper bound on the negative entropy production rate (74) is attained:

|Ṡ^X_env| = D_S (1 − η^X_S)/η^X_S. (166)

Thus, the equality condition is satisfied even far from equilibrium in this case. This is in contrast to the standard long-time TUR, where the equality is guaranteed only in the near-equilibrium limit. We next consider the trade-off relation where the negative entropy production is bounded by the fluctuation of the time-integrated stochastic information flow D_I (76). The fluctuation of the information flow D_I can also be calculated by using the tilted dynamics, and it satisfies D_S/D_I = η^X_S. Therefore, from (83), it follows that the upper bound of (76) is exactly the same as that of (74). This implies that the trade-off between the information flow and efficiency (82) also achieves equality in this case:

|İ^X| = D_I (1 − η^X_S). (168)

We now consider the possibility of achieving finite negative entropy production even when η^X_S → 1. We first note that the negative entropy production can be expressed in terms of η^X_S. Since 0 < ω̃_XY ω̃_YX < 1, we find that |Ṡ^X_env| → 0 as η^X_S → 1 as long as ω_X is finite. In contrast, if ω_X is scaled as ω_X = ω_0/(1 − η^X_S), the negative entropy production can remain finite even in the limit η^X_S → 1. As can be seen from the trade-off relation (166), the fluctuation of the entropy production blows up as η^X_S → 1 in this case (see Fig. 4(a)). Similarly, the information flow can also remain finite in the limit η^X_S → 1, at the expense of a blow-up of the fluctuation of the information flow as η^X_S → 1 (see Fig. 4(b)).
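Since the model equations themselves are not reproduced in this excerpt, the following sketch assumes a generic coupled linear Langevin pair with a fast Y (the names a, b for the couplings ω̃_XY, ω̃_YX and the ϵ-scaling are our assumptions) and verifies the stationary covariance against the Lyapunov equation, which is the starting point for all steady-state quantities used in this subsection.

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

# Assumed model:
#   dx = (-x + a*y) dt + sqrt(2*Dx) dWx
#   dy = (b*x - y)/eps dt + sqrt(2*Dy/eps) dWy
a, b, Dx, Dy, eps = 0.5, 0.8, 1.0, 2.0, 0.05
A = np.array([[-1.0, a], [b / eps, -1.0 / eps]])
D = np.diag([Dx, Dy / eps])

Sigma = solve_continuous_lyapunov(A, -2.0 * D)   # solves A S + S A^T = -2 D

rng = np.random.default_rng(3)                   # cross-check by simulation
dt, nsteps = 2e-4, 500_000
z, acc = np.zeros(2), np.zeros((2, 2))
amp = np.sqrt(2.0 * np.diag(D) * dt)
for _ in range(nsteps):
    z = z + (A @ z) * dt + amp * rng.standard_normal(2)
    acc += np.outer(z, z)
print("Lyapunov covariance:\n", np.round(Sigma, 3))
print("simulated covariance:\n", np.round(acc / nsteps, 3))
```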
Equality condition of bipartite TUR for this model
Here, we discuss the reason why the equality of the trade-offs (166) and (168) is achieved in this model. We first recall that these trade-offs are special cases of the bipartite TUR in the fast relaxation limit of Y (55). In this model, the time-integrated generalized current Ĵ_T for the subsystem X can be expressed as an integral with an arbitrary weight function g(x, y). As in Sec. III D, the current can be decomposed as Ĵ_T = Ĵ^I_T + Ĵ^II_T, where the symbol · denotes the Ito product and W^X_t denotes the Wiener process. We now show that this model satisfies the sufficient condition, described in Sec. III D, for the bipartite TUR in the fast relaxation limit of Y (55) to hold with equality. First, the weight of the current should be proportional to that of the partial entropy production (178). The time-integrated stochastic information flow ∆Î^X in the steady state with the fast relaxation limit of Y is an example that satisfies this condition. Second, for this choice of the current, the fluctuation of Ĵ^II_T must go to zero in the fast relaxation limit of Y: D^II_J → 0. In this model, we can confirm that this condition is indeed satisfied by explicitly calculating D^II_J (see Appendix D 2). As a result, the equality of (55) is achieved for a current that satisfies the condition (178) in the fast relaxation limit of Y:

J²/D_J = Ṡ^X_tot − İ^X. (181)

Note that the condition described above is only a sufficient condition. In fact, the equality (181) holds for more diverse types of currents that do not even satisfy the condition (178) in this model. To see this, note that a current Ĵ_T satisfying condition (178) can be expressed as in (182), where the second term is a boundary term that can be ignored when considering the long-time statistical properties of Ĵ_T. (For the effect of such a boundary term on the large deviation, see [64].) Therefore, any current Ĵ_T that has the same long-time statistical properties as (182) satisfies the equality (181). An example is the stochastic medium entropy production ∆Ŝ^X_env (183). Hence, the choice Ĵ_T = ∆Ŝ^X_env also satisfies the equality (181), although it does not satisfy the condition (178). Indeed, we can show that the corresponding relation (184) holds, where D^I_S := lim_{T→∞} Var[Ĵ^I_T]/2T denotes the fluctuation of Ĵ^I_T with Ĵ_T = ∆Ŝ^X_env.
Input-output fluctuation inequalities
We finally consider the input-output fluctuation inequalities for the case of ω XY ω Y X > 0 and ω XY D Y < ω Y X D X , where the entropy production and the information flow correspond to the output and input currents, respectively. From the relation D S /D I = η X S , it immediately follows that D S ≤ D I and D I /(İ X ) 2 = D S /(Ṡ X env ) 2 . Thus, as in the previous example, the input-output fluctuation inequalities are satisfied even beyond the linear response regime in this model, and the equality is achieved for the inequality regarding the relative fluctuations.
VII. CONCLUDING REMARKS
In this paper, we have obtained several fundamental limits for information processing systems. Specifically, we have derived a TUR-type inequality for bipartite systems that provides a universal lower bound on the relative fluctuation of an arbitrary current for a system of interest by the associated partial entropy production, which includes the information flow. This bipartite TUR includes the standard TUR as a special case and incorporates the effect of the interaction with external auxiliary systems. As a corollary to this inequality, we have derived universal trade-off relations between the negative entropy production rate and the information-thermodynamic efficiency, which can be regarded as an extension of the trade-offs for heat engines [21,22] to information-thermodynamic engines. Furthermore, in the fast relaxation limit of the auxiliary system, we have shown that the Gallavotti-Cohen symmetry holds even in the presence of information flow. From this symmetry, we can show that the input-output fluctuation inequalities are also valid for information processing systems. We have illustrated our results with two simple examples: coupled quantum dots and coupled linear overdamped Langevin equations. In particular, we have seen that the latter provides an example where the equality of the bipartite TUR is achieved even far from equilibrium.
Here, we provide some remarks on previous studies related to our results. We first note that the bipartite TUR in the short time limit T → 0 was already proven in [52] using the Cauchy-Schwarz inequality. Our first main result (26) can be regarded as an extension of the short-time bipartite TUR to an arbitrary observation time T. TUR-type inequalities including measurement and feedback have also been derived from fluctuation theorems in [65,66]. While these relations include a contribution of information induced by measurement and feedback processes, this contribution appears in the form of total entropy production rather than partial entropy production. Therefore, our bipartite TUR can provide more stringent bounds on the precision of currents under measurement and feedback control. The standard TUR has also been discussed as a tool for inferring entropy production [52,[67][68][69][70][71]]. In this context, the bipartite TUR proved here may provide a promising approach to estimating partial entropy production, and especially information flow.
Next, we remark on the range of validity of the bipartite TUR. While here we have presented the bipartite TUR in the steady state, this relation is valid even for systems under arbitrary time-dependent driving from arbitrary initial states. In Appendix A, we provide a proof of the bipartite TUR in a general form for the case of overdamped Langevin equations. It should also be noted that, just as with the standard TUR, the bipartite TUR is generally not valid for systems with broken time-reversal symmetry, such as underdamped Langevin dynamics [37][38][39][40][41][42][43]. However, many relevant biological systems are often described by continuous-time Markov jump processes or diffusion processes with only even variables and parameters under time reversal. Therefore, the results described in this paper will be applicable to a wide range of systems, including biological systems.
In this study, we have focused mainly on the case where an auxiliary system evolves much faster than the system of interest. Such a separation of time scales allows the dynamics of a composite system to be reduced to the effective dynamics of the system of interest, and thus various universal relations similar to those found for a single system hold. While we expect such a separation of time scales to be ubiquitous in biological systems, extending our results to cases where there is no clear time-scale separation would be important for elucidating the design principles of biological systems.
Noting that $\partial_{t'}\, p(x,t|x',t') = -\partial_t\, p(x,t|x',t')$, and hence integrating by parts, we obtain the desired result.

Appendix C: Coupled quantum dots

In this appendix, we provide a detailed calculation of the fluctuation of the chemical work $D_W$ in the fast relaxation limit of $Y$ for the coupled quantum dots introduced in Sec. VI A. The fluctuation of the information flow $D_I$ can be calculated in a similar way. The stochastic chemical work is defined through the counting factor

$$b_{xx'} := \begin{cases} 1 & (x = 1,\; x' = 0), \\ -1 & (x = 0,\; x' = 1), \end{cases} \tag{C2}$$

and its fluctuation is defined as the scaled variance $D_W := \lim_{T\to\infty} \mathrm{Var}[\Delta \hat{W}^X]/2T$. The fluctuation $D_W$ can be obtained from the scaled cumulant generating function. As described in Sec. V A, the scaled cumulant generating function can be calculated by considering the generating function conditioned on a final state $(x, y)$:

$$G_T(x, y) := \int d\Delta W^X\, p_T(x, y, \Delta W^X)\, e^{\lambda \Delta W^X}. \tag{C5}$$

The time evolution of $G_T(x, y)$ is governed by the tilted generators $\mathcal{L}^X_\lambda$ and $\mathcal{L}^Y_\lambda$, where we have used the dimensionless slow time $\tau := \Gamma^X T$ and the dimensionless transition rates $\tilde{w}^{(\nu)y}_{xx'} := w^{(\nu)y}_{xx'}/\Gamma^X$ and $\tilde{w}^{yy'}_x := w^{yy'}_x/\Gamma^Y$, with the small parameter $\epsilon := \Gamma^X/\Gamma^Y \ll 1$ (not to be confused with the error probability $\varepsilon$).

Since we are interested in the fast relaxation limit of $Y$, we can consider the effective tilted dynamics for $G^X_\tau := \sum_y G_\tau$. Performing a perturbation expansion as in Sec. V A yields an effective tilted generator $\mathcal{L}^X_\lambda$, expressed in terms of effective transition rates. The second derivative of the largest eigenvalue $\theta_{\max}$ then gives the fluctuation $D_W$ in terms of $D_n$, the fluctuation of the net particle current.

Appendix D: Coupled linear overdamped Langevin equations

The fluctuation $D_S$ can be obtained from the scaled cumulant generating function. To compute it, we introduce the generating function conditioned on an initial state $(x_0, y_0) = (x, y)$, defined by

$$G_T(x, y) := \langle e^{\lambda \Delta \hat{S}^X_{\mathrm{env}}} \,|\, x, y \rangle.$$

The time evolution of $G_T$ is described by the Feynman-Kac formula [78], with a tilted generator $\mathcal{L}^\dagger_\lambda$ built from the dimensionless drift terms $\tilde{F}^X(x, y) := -x + \tilde{\omega}_{XY}\, y$ and $\tilde{F}^Y(x, y) := \tilde{\omega}_{YX}\, x - y$, together with the auxiliary function $g(x, y) := (-x + \tilde{\omega}_{XY}\, y)/\tilde{D}_X$. The largest eigenvalue of this tilted generator gives the scaled cumulant generating function.

Since we are interested in the fast relaxation limit of $Y$, we can further simplify the problem by considering the effective tilted generator for $X$, as follows. We first assume that $G_\tau$ has an asymptotic expansion in the asymptotic sequence $\{\epsilon^n\}_{n=0}^{\infty}$ as $\epsilon \to 0$. Here, we impose the normalization condition $\int dy\, \pi_{\mathrm{ss}}(y|x)\, G^{(0)}_\tau(x, y) = \int dy\, \pi_{\mathrm{ss}}(y|x)\, G_\tau(x, y)$, where $\pi_{\mathrm{ss}}$ denotes the zero-eigenfunction of $\mathcal{L}^Y_0$. Substituting this expansion into (D6), the leading order fixes $G^{(0)}_\tau$. Since $\mathcal{L}^{Y\dagger}_\lambda = \mathcal{L}^{Y\dagger}_0$, the zero-eigenfunction of $\mathcal{L}^{Y\dagger}_\lambda$ is $1$; from the Perron-Frobenius theorem and the normalization condition, the relevant observable reduces to the time integral $\int f(x_\tau, y_\tau)\, d\tau$, where

$$f(x, y) := C\, (y - \tilde{\omega}_{YX}\, x)(-x + \tilde{\omega}_{XY}\, y).$$

The fluctuation $D^{\mathrm{II}}_J$ can be obtained from the scaled cumulant generating function, which corresponds to the largest eigenvalue of the corresponding tilted generator [78]. The effective tilted generator is then given by

$$\mathcal{L}^{X\dagger}_\lambda := \int dy\, \pi_{\mathrm{ss}}(y|x)\, \mathcal{L}^{X\dagger}_\lambda = -(1 - \tilde{\omega}_{XY}\tilde{\omega}_{YX})\, x\, \partial_x + \tilde{D}_X\, \partial_x^2 + \lambda C\, \big( -\tilde{\omega}_{YX}\tilde{D}_X + \tilde{\omega}_{XY}\tilde{D}_Y \big).$$

By performing a calculation similar to that of the previous section, we finally find

$$D^{\mathrm{II}}_J := \lim_{T \to \infty} \mathrm{Var}[\hat{J}^{\mathrm{II}}_T]/2T = 0.$$
"Physics",
"Computer Science"
] |
Detection of Foreign Matter in Transfusion Solution Based on Gaussian Background Modeling and an Optimized BP Neural Network
This paper proposes a new method to detect and identify foreign matter mixed in a plastic bottle filled with transfusion solution. A spin-stop mechanism and mixed illumination style are applied to obtain high contrast images between moving foreign matter and a static transfusion background. The Gaussian mixture model is used to model the complex background of the transfusion image and to extract moving objects. A set of features of moving objects are extracted and selected by the ReliefF algorithm, and optimal feature vectors are fed into the back propagation (BP) neural network to distinguish between foreign matter and bubbles. The mind evolutionary algorithm (MEA) is applied to optimize the connection weights and thresholds of the BP neural network to obtain a higher classification accuracy and faster convergence rate. Experimental results show that the proposed method can effectively detect visible foreign matter in 250-mL transfusion bottles. The misdetection rate and false alarm rate are low, and the detection accuracy and detection speed are satisfactory.
Introduction
Medical transfusion solutions are among the important preparations of the pharmaceutical industry in China. At present, the transfusion packaging market mainly consists of plastic bottles, glass bottles and soft bags. Sales of plastic bottles have grown quickly because of their good chemical stability, light weight and lower risk of contamination. However, visible foreign matter may enter the transfusion solution during production, filling and packaging. Such foreign matter can cause serious conditions [1], such as tumors, phlebitis, anaphylactic reactions or even death, when it is injected into a patient's bloodstream. Thus, transfusion products must undergo rigorous inspection before entering the market. However, owing to a lack of key technologies, most pharmaceutical manufacturers in China have adopted the manual light-inspection method: in a dark room, trained inspectors invert a transfusion container illuminated by a bright lamp, check whether visible foreign matter exists, and subjectively decide whether the product is qualified. The method is simple, but the judgment lacks a rigorous and objective evaluation criterion, so its accuracy and repeatability are low. A visual inspection system for foreign matter in injection products was developed early on [2]. Since then, much research has been done and many methods [3][4][5] have been proposed. Most of this work targets ampoules or vials, which are rotated at high speed and stopped abruptly; the solution in the container keeps rotating as a vortex due to inertia, so foreign matter moving with the solution can be distinguished from the stationary background on the surface of the ampoule. An ampoule or vial, however, is small, has a standard circular cross-section, and has a smooth, highly transparent surface, so it is easy to gather the foreign matter at the center of the container and to distinguish it from the simple background. In this paper, the detected object is a 250-mL plastic bottle filled with sodium chloride injection. A plastic bottle is less transparent than a glass one, and its surface easily forms large diffuse-reflection regions. The cross-sections of the bottle are elliptical, and the surface is complicated, with embossed symbols and graduations as well as paper labels that give the usage and dosage, product batch number and other information. All of these characteristics increase the difficulty of detecting and identifying foreign matter in such a complex environment.
The detection of foreign matter mixed in a plastic bottle of medicinal solution using real-time video image processing is proposed in [6], but this method cannot distinguish foreign matter from bubbles effectively; it only tries to obtain high-contrast images between foreign matter and bubbles by adjusting the positions and angles of the light source and the exposure time. The method proposed in [7] uses mean shift to track moving objects over five frames and treats only falling objects as foreign matter. This method may have a high misdetection rate for low-density foreign matter, such as floating debris, which circles with the solution, occasionally rising and occasionally falling, and may therefore be missed. Moghadas et al. [8] proposed a non-rotating method based on stereo vision to acquire moving objects and used a multi-layer perceptron (MLP) neural network to distinguish foreign matter from bubbles in a vial. However, this method may be confused by spurious light and instability of the light source. A method based on frame differencing and a modified pulse-coupled neural network segmentation [9] has been used for moving object detection. The frame difference method is commonly used, but it weakens the energy of foreign matter and is sensitive to noise and spurious light, so it may extract interference points in a poor-quality image; it therefore places high demands on subsequent segmentation algorithms.
In this paper, Gaussian background modeling is used to extract moving objects. It is a good method for modeling complex backgrounds: it models each pixel according to its gray value and separates moving objects from the static background more effectively than the difference method, even for poor-quality images. After that, a set of features is extracted for each moving object and selected by the ReliefF algorithm. A back propagation (BP) neural network is then used as a classifier, trained on these features to distinguish foreign matter from bubbles. The BP neural network is one of the most widely used neural network models; it can learn and store a large number of input-output mapping relationships. However, the BP algorithm suffers from slow convergence and low accuracy and easily falls into local minima. Therefore, the mind evolutionary algorithm (MEA), a global optimization approach, is applied to optimize the BP neural network. MEA improves search efficiency and successfully overcomes the premature convergence and slow convergence speed of BP. The experimental results show that our method can detect and identify visible foreign matter effectively with satisfactory speed and accuracy.
This article is organized as follows. In Section 2, we describe the system framework and illumination style. In Section 3, we present the key detection and identification algorithms. The Gaussian background modeling method is first applied to detect foreign matter; it can accurately extract moving objects while suppressing background noise. In the detection process, bubbles are a significant source of interference for foreign matter, so we apply the BP neural network, trained on the corresponding features, to distinguish between foreign matter and bubbles. To reduce the computational load of the system, optimal feature vectors are obtained with the ReliefF feature selection algorithm, and the BP algorithm optimized by MEA achieves higher classification accuracy and a faster convergence rate. In Section 4, we present the experimental results and analysis. Finally, conclusions are drawn in Section 5.
System Framework
When the transfusion bottle is still, foreign matter usually rests at the bottom or adheres to the sidewalls of the bottle. To obtain high-contrast images, the transfusion bottle is first rotated by the motor at 7 r/s, then decelerated to 3 r/s to produce a stationary vortex, and finally stopped suddenly.
To reduce the interference of bubbles, the camera and light source are activated simultaneously after the bubbles have risen to the liquid surface (about 0.7 s) to acquire a sequence of foreign-matter images. Finally, the images are processed by the image processing unit to judge whether the product contains foreign matter. The framework of the detection system for a plastic transfusion bottle is shown in Figure 1.
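As a sketch, the acquisition sequence above can be expressed as a small control routine. The motor and camera objects below are hypothetical placeholders; only the speeds (7 r/s and 3 r/s) and the 0.7-s settling delay come from the text.

```python
# Hypothetical sketch of the spin-stop acquisition sequence; `motor` and
# `camera` stand in for the real hardware drivers, which are not described
# at API level in the paper.
import time

def acquire_sequence(motor, camera, n_frames=5):
    motor.rotate(speed_rps=7)   # spin up the bottle
    motor.rotate(speed_rps=3)   # decelerate to form a stationary vortex
    motor.stop()                # abrupt stop: the solution keeps rotating
    time.sleep(0.7)             # wait for bubbles to rise to the surface
    return [camera.grab() for _ in range(n_frames)]
```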
Illumination Style
To enhance the contrast between the moving foreign matter and the static background, a proper mixed illumination style is designed. Two bar-shaped LED lights are mounted on the left and right sides of the transfusion bottle, diagonally above it at 45 degrees, so that the center lines of the two beams intersect at the center of the bottle. This produces a uniform, stable light background and avoids the diffuse reflectance caused by light shining directly on the sidewalls of the bottle. Another round condensing LED light is installed above the bottle to enhance the reflection from foreign matter. The whole transfusion bottle is illuminated uniformly: opaque foreign matter appears black on the CCD camera because it obstructs the transmitted light, while reflective matter appears white on the CCD. This mixed illumination style yields high-contrast images of foreign matter. In the experiment, red LED light is used because of its sensitivity to tiny foreign matter in the solution.
Key Algorithm of Detection and Identification
A transfusion bottle has a complicated surface, with scales, scratches and embossed labels, which makes it difficult to distinguish foreign matter from surface unevenness. In the experiment, the bottle's complicated surface, system random noise, instability of the light source and dithering of the mechanical devices together form a complex background. To detect foreign matter, a Gaussian mixture model is applied to model the transfusion image: it models each pixel according to its gray value and updates the model parameters to separate moving objects from the static background. In the detection process, bubbles are a significant source of interference. Thus, after detection, a set of features is extracted for each moving object and selected by the ReliefF algorithm, which assigns each feature a weight according to its contribution to classification performance. Finally, the BP neural network is used as a classifier, trained on the extracted features to classify and identify foreign matter versus bubbles. To improve the convergence speed and classification accuracy, MEA is applied to optimize the connection weights and thresholds of the BP neural network. In the experiment, rubber, hairs and floating debris are treated as foreign matter to be distinguished from bubbles.
The detection and identification flowchart of foreign matter in transfusion is shown in Figure 2.
Moving Object Detection in Transfusion
In a transfusion image containing foreign matter, the gray value of the background changes relatively little in the absence of foreign matter. When black foreign matter or white matter appears at a pixel, the gray value plunges or surges, as shown in Figure 3. When black foreign matter appears at position [136, 495] in Figure 3a, in the thirteenth frame of a sequence of 100 images, the gray value plunges to 50, as shown in Figure 3b; at position [171, 636] in Figure 3a, without foreign matter, the variation is less than 15 and is caused by light-source disturbance, as shown in Figure 3c. Gaussian background modeling [10][11][12] distinguishes a moving target from the background based on the gray-value variation at the same position.
(1) Based on the above analysis, we use K = 3 Gaussian models to model every pixel in the transfusion image. At time t, the probability of observing a background pixel value $X_t$ is the weighted sum of the K Gaussian distributions:

$$P(X_t) = \sum_{i=1}^{K} \omega_{i,t}\; \eta(X_t; \mu_{i,t}, \Sigma_{i,t}),$$

where the probability density function of the i-th Gaussian model is

$$\eta(X_t; \mu, \Sigma) = \frac{1}{(2\pi)^{n/2}\,|\Sigma|^{1/2}} \exp\!\left( -\tfrac{1}{2}(X_t - \mu)^{\mathsf T}\, \Sigma^{-1}\, (X_t - \mu) \right),$$

and $\omega_{i,t}$, $\mu_{i,t}$ and $\Sigma_{i,t}$ are the weight, mean value and covariance matrix of the i-th Gaussian distribution.
(2) Gaussian model parameter update: when a new image arrives, every pixel $X_t$ is matched against the existing K Gaussian distributions. If a match is found, $X_t$ is used to update the mean and variance of the matched model and to increase its weight. If no match is found, $X_t$ is used to initialize a new Gaussian distribution that replaces the existing model with the smallest weight.
The match criterion is

$$|X_t - \mu_{i,t-1}| < \phi\, \sigma_{i,t-1},$$

where $\phi$ has an empirical value of 2.5.
(3) Judgment of foreground and background: if a pixel $X_t$ belongs to the background, the Gaussian distribution corresponding to it has a small variance and a large weight. The Gaussian models are therefore sorted in descending order of $\omega/\sigma$, so that background distributions are always at the top of the K distributions, and the first B distributions represent the background:

$$B = \arg\min_b \left( \sum_{i=1}^{b} \omega_i > T_R \right).$$

The threshold $T_R$ is the smallest proportion of the total weight accounted for by the background distributions. If $T_R$ is small, the background model becomes single-mode, which saves computation; if $T_R$ is relatively large, the mixture model can accommodate background disturbance and noise at the cost of more computation. Balancing processing quality against computational cost, $T_R$ is chosen as 0.25 in this paper.
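As a concrete sketch of this procedure, OpenCV's Gaussian-mixture background subtractor can be configured with the parameters used here (K = 3, T_R = 0.25, learning rate 0.05, a 2.5σ match threshold); the video path and history length are illustrative assumptions. Note that OpenCV's MOG2 thresholds the squared Mahalanobis distance, so the 2.5σ criterion enters as 2.5².

```python
# Sketch of Gaussian-mixture background subtraction with OpenCV's MOG2,
# configured with the parameter values reported in this paper.
import cv2

mog = cv2.createBackgroundSubtractorMOG2(history=100,       # assumed
                                         varThreshold=2.5 ** 2,
                                         detectShadows=False)
mog.setNMixtures(3)           # K = 3 Gaussians per pixel
mog.setBackgroundRatio(0.25)  # T_R = 0.25

cap = cv2.VideoCapture("transfusion_sequence.avi")  # hypothetical file
while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    fg = mog.apply(gray, learningRate=0.05)  # alpha = 0.05 as in the paper
    # connected components of the foreground mask are the moving-object blobs
    n_blobs, labels = cv2.connectedComponents(fg)
cap.release()
```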
Feature Extraction of Moving Objects
For the residual images after Gaussian background modeling, every connected domain is labeled from 1 to k, each corresponding to a moving target. For further classification and identification, a set of features is extracted from each target and fed into the BP neural network. For the tiny moving objects in a transfusion image, which has a low signal-to-noise ratio, the available features are very limited: a few shape features, gray values and their statistical properties. The extracted features should differ greatly between classes, yet change little under transformations such as motion, rotation and translation of the same moving object across sequential frames. Based on this analysis, the following feature set is used.
Two types of region descriptors computed from the connected region have been used: the geometric invariants and the gray feature descriptors.
(1) The geometric invariant descriptors: the Hu [13] invariants are good descriptors for moving objects in sequential images, since they are invariant to rotation, scaling and translation. These invariants are nonlinear combinations of the normalized central moments $\eta_{pq}$; the first two, for example, are

$$H_1 = \eta_{20} + \eta_{02}, \qquad H_2 = (\eta_{20} - \eta_{02})^2 + 4\eta_{11}^2.$$

(2) The gray feature descriptors: when foreign matter in the solution is illuminated from above, the gray level is a good discriminator for the four classes of moving objects. For opaque substances, like hairs and rubber, the gray values are relatively low, whereas transparent bubbles have higher gray values. Therefore, a gray feature set based on the statistical properties of the gray histogram is extracted.
The image histogram reflects the gray-level distribution of the image:

$$p(k) = \frac{n_k}{N}, \qquad k = 0, 1, \ldots, L-1,$$

where k is the gray level, $n_k$ is the number of pixels at gray level k, N is the total number of pixels in the gray image, and L is the maximum number of gray levels.
The feature set has fifteen features as follows (Table 1):
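A minimal sketch of the per-blob feature computation follows, covering the features named above (the seven Hu moments and gray-histogram statistics such as the mean, standard deviation σ and entropy e). The exact membership of the fifteen features in Table 1 is not reproduced here, so this helper is illustrative rather than exhaustive.

```python
# Sketch of per-blob feature extraction: Hu moment invariants plus
# gray-level histogram statistics, as described in the text.
import cv2
import numpy as np

def blob_features(gray, mask):
    """gray: 8-bit grayscale frame; mask: binary (0/255) mask of one blob."""
    hu = cv2.HuMoments(cv2.moments(mask, binaryImage=True)).ravel()  # H1..H7
    pix = gray[mask > 0]
    hist = np.bincount(pix, minlength=256).astype(np.float64)
    p = hist[hist > 0] / pix.size            # p(k) = n_k / N
    entropy = -np.sum(p * np.log2(p))        # gray entropy e
    return np.concatenate([hu, [pix.mean(), pix.std(), entropy]])
```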
Feature Selection by the ReliefF Algorithm
The ReliefF [14][15][16] algorithm is a typical feature selection algorithm with high efficiency. When dealing with multi-class problems, ReliefF randomly selects an instance $R_i$ and then searches for k of its nearest neighbors from the same class (the nearest hits $H_j$) and k nearest neighbors from each different class (the nearest misses $M_j$).
It then updates the quality estimation weight W[A] of each feature A according to

$$W[A] \leftarrow W[A] - \sum_{j=1}^{k} \frac{\mathrm{diff}(A, R_i, H_j)}{m k} + \sum_{C \neq \mathrm{class}(R_i)} \frac{P(C)}{1 - P(\mathrm{class}(R_i))} \sum_{j=1}^{k} \frac{\mathrm{diff}(A, R_i, M_j(C))}{m k},$$

where m is the number of sampling iterations, P(C) is the prior probability of class C, and diff measures the difference of feature A between two instances. Ultimately, the features with greater weights serve as effective features for classification and recognition.
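The sketch below implements this multi-class ReliefF update in NumPy, assuming features scaled to [0, 1] so that diff() reduces to an absolute difference; it is a compact illustration rather than the exact implementation used in the paper.

```python
# Minimal multi-class ReliefF sketch (after the cited Kononenko /
# Robnik-Sikonja formulation). X: (n, d) features in [0, 1]; y: int labels.
import numpy as np

def relieff(X, y, k=8, m=20, rng=np.random.default_rng(0)):
    n, d = X.shape
    w = np.zeros(d)
    classes, counts = np.unique(y, return_counts=True)
    priors = counts / n
    for _ in range(m):
        i = rng.integers(n)
        dist = np.abs(X - X[i]).sum(axis=1)   # Manhattan distance
        dist[i] = np.inf                      # exclude the instance itself
        for c, pc in zip(classes, priors):
            idx = np.where(y == c)[0]
            idx = idx[np.argsort(dist[idx])][:k]   # k nearest in class c
            contrib = np.abs(X[idx] - X[i]).mean(axis=0) / m
            if c == y[i]:
                w -= contrib                       # nearest hits H_j
            else:                                  # nearest misses M_j(c)
                w += pc / (1 - priors[classes == y[i]][0]) * contrib
    return w
```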
BP Algorithm Optimized by MEA
The BP neural network is used as a classifier, trained on the features to classify and recognize foreign matter and bubbles. It is one of the most widely used neural network models, but the BP algorithm suffers from slow convergence and low accuracy and easily falls into local minima.
MEA [17][18][19] is a global optimization approach. It replaces the crossover and mutation operators of the genetic algorithm (GA) with similartaxis and dissimilation, respectively. The population is divided into groups; individuals within a group exchange information, learn from one another and compete locally, and the groups likewise exchange information, learn and compete globally. The algorithm improves search efficiency and successfully overcomes the premature convergence and slow convergence speed of BP. We therefore apply MEA to optimize the connection weights and thresholds of the BP neural network. The flow chart of the BP algorithm optimized by MEA is shown in Figure 4.
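A full MEA implementation is beyond a short example, but the following sketch conveys the idea of the hybrid scheme: several groups each run a local population search over flat MLP weight vectors (a loose stand-in for similartaxis and dissimilation), and the globally winning vector then seeds ordinary gradient (BP) training. All function and parameter names here are illustrative.

```python
# Loose, simplified stand-in for MEA-optimized BP: population search over
# initial weights of a one-hidden-layer MLP, then pick the global winner.
import numpy as np

def mlp_forward(w, X, n_in, n_hid, n_out):
    i = 0
    W1 = w[i:i + n_in * n_hid].reshape(n_in, n_hid); i += n_in * n_hid
    b1 = w[i:i + n_hid]; i += n_hid
    W2 = w[i:i + n_hid * n_out].reshape(n_hid, n_out); i += n_hid * n_out
    b2 = w[i:i + n_out]
    return np.tanh(X @ W1 + b1) @ W2 + b2

def score(w, X, y, n_in, n_hid, n_out):
    return (mlp_forward(w, X, n_in, n_hid, n_out).argmax(1) == y).mean()

def mea_like_init(X, y, n_in, n_hid, n_out, groups=5, size=20, gens=30,
                  rng=np.random.default_rng(0)):
    dim = n_in * n_hid + n_hid + n_hid * n_out + n_out
    winners = []
    for _ in range(groups):                 # local competition per group
        pop = rng.normal(0, 0.5, (size, dim))
        for _ in range(gens):               # "similartaxis": resample around winner
            best = pop[np.argmax([score(w, X, y, n_in, n_hid, n_out)
                                  for w in pop])]
            pop = best + rng.normal(0, 0.1, (size, dim))
            pop[0] = best                   # keep the group winner
        winners.append(pop[0])
    scores = [score(w, X, y, n_in, n_hid, n_out) for w in winners]
    return winners[int(np.argmax(scores))]  # global competition
```

The returned weight vector would then initialize standard BP (gradient-descent) training, which is the role MEA plays for the paper's classifier.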
Experimental Results and Analysis
We capture five consecutive frames of 250-mL transfusion images using the mixed illumination described above and compare the results of the Gaussian background model method and the traditional inter-frame difference method, as shown in Figure 5. In the differencing process, the energy of the moving objects is weakened, which makes them difficult to see clearly. To highlight the contours of moving objects, the Canny operator is used to extract edges in the difference images, shown in Figure 5(g1-g5). The detection results of the Gaussian background model are shown in Figure 5(h1-h5). The Gaussian model parameters used in the experiments are: number of Gaussian distributions K = 3; initial variance 15; learning rate α = 0.05; variance threshold φ = 2.5; foreground threshold T_R = 0.25.
Experiment 1: Foreign Matter Detection in the Liquid Region
Due to the filtering process, the edges of Figure 5(c1-c5) are blurred, and the results of Figure 5(g1-g5) and Figure 5(h1-h5) seem to be expanded, compared with the original images.
Two experiments are implemented according to the different positions where the foreign matter occurs. When foreign matter is suspended in the liquid, the foreign matter image is clear. The Canny edge detection method after inter-frame difference and the Gaussian background model method both can detect the foreign matter effectively, as shown in Figure 5.
Experiment 2: Foreign Matter Detection in the Reflective Regions of the Plastic Bottle
When foreign matter appears in the reflective regions at the bottom of the bottle, it is difficult to detect after inter-frame differencing, as shown in Figure 6(d1-d4), because its energy is deeply weakened and it is easily drowned out by the complex background. The edge detection results in Figure 6(e1-e4) show that background noise remains alongside the moving objects in the difference images. The Gaussian background modeling method extracts the moving objects accurately and suppresses background noise effectively, as shown in Figure 6(g1-g4). The traditional frame difference method is simple and fast, but the differencing seriously weakens the energy of moving objects. Foreign matter in the transfusion solution is tiny and has low energy, so the difference results are easily submerged in interference and noise, especially in a poor-quality image. When the difference method is used, subsequent processing must enhance the energy of moving objects, and a very complex segmentation method, such as modified pulse-coupled neural networks [9], is needed to extract foreign matter, which increases the complexity of the algorithm.
The Gaussian background modeling method updates the background model in a timely manner based on the gray-value variations in the image. It can cope with shadows, lighting changes, slow-moving objects, and objects being introduced into or removed from the scene. It extracts moving objects with their energy unchanged and does not require a complex subsequent segmentation algorithm, whereas traditional difference methods typically fail in these general situations. Gaussian background modeling is therefore a robust and effective method for detecting foreign matter in transfusion solution.
Experiment 3: ReliefF Algorithm Features Selection
Experienced workers selected 300 bottles of transfusion products with typical foreign matter and 100 bottles without foreign matter, the latter used to obtain bubble samples. Each transfusion product was checked, and five images were captured on our experimental platform. All images were then processed with our algorithm, and all moving objects were extracted as samples. We picked out samples of various sizes and morphologies as the training set and randomly selected the test set from the remaining samples.
In the experiment, fifteen features from each of ten training sets are used to test the ReliefF algorithm. Each training set has N = 200 samples covering the four object classes of interest: rubber, hairs, floating debris and bubbles. For each sampled instance, k = 8 nearest hits and misses are searched, and the whole procedure is repeated m = 20 times to update the quality estimation weights. The final weight distribution is the average over the 20 iterations. The results show strong similarity across sets, and the weight of each feature changes little, as shown in Figure 7.
Figure 7a shows the weight distribution of the fifteen features in one set; Figure 7b shows the weight distribution of the fifteen features across the ten sets.
The ReliefF algorithm assigns larger weights to the seventh, eighth and ninth features; that is, the first, second and third components of the Hu moments play the most important roles in the classification process. It assigns smaller weights to the other features, several below 0.05, indicating that these features are largely redundant for classification. Since the ReliefF algorithm cannot itself remove redundant features, the BP algorithm is used to test the features, remove the redundant ones and obtain the optimal feature vectors.
Experiment 4: BP Algorithm Test for Optimal Feature Dimensionality
In the classification accuracy test, features are added one at a time in descending order of the weights shown in Figure 7; that is, features 7, 8, 9, 2, 11, 6, 13, 10, 1, 4, 12, 3, 5, 14 and 15 are successively added and tested. The subspace feature dimension thus goes from 1 to 15, and the BP algorithm is applied to classify the moving objects. The BP network has three layers, with five hidden neurons; the training and test sets contain 300 and 200 samples, respectively. The classification accuracy is shown in Figure 8; the selected optimal features include the leading Hu-moment components and gray statistics such as the standard deviation σ and the entropy e. Figure 9 shows the spatial distribution of the moving objects in the best three-dimensional subspace obtained by the feature selection, spanned by $H_1$, $H_2$ and $H_3$. All four classes of objects can be easily distinguished. The distribution for rubber in this subspace is the most compact, because its shape is stable in the transfusion solution, whereas the shapes of hairs, floating debris and bubbles change easily as they move with the solution, so their spatial distributions are relatively dispersed, as shown in Figure 9.
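The wrapper test just described amounts to a simple loop. In the sketch below, an sklearn MLP stands in for the paper's BP network; `weights` and the train/test arrays are assumed to come from the previous steps.

```python
# Sketch of the feature-dimension wrapper test: add features in descending
# ReliefF weight and keep the dimensionality with the best test accuracy.
import numpy as np
from sklearn.neural_network import MLPClassifier

order = np.argsort(-weights)              # descending ReliefF weights
best_acc, best_k = 0.0, 0
for k in range(1, len(order) + 1):
    cols = order[:k]
    clf = MLPClassifier(hidden_layer_sizes=(5,), max_iter=2000)
    clf.fit(X_train[:, cols], y_train)
    acc = clf.score(X_test[:, cols], y_test)
    if acc > best_acc:
        best_acc, best_k = acc, k
```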
Experiment 5: Test for BP Algorithm Optimized by MEA
In the experiment, MEA is used to optimize the connection weights and thresholds for the BP algorithm to obtain a faster convergence speed.
For the BP algorithm, the choice of the number of hidden neurons is important: too few neurons cannot achieve high classification and recognition accuracy, while too many increase the training time and over-train the network. To find the best test accuracy, the BP neural network is tested with different numbers of hidden neurons, as shown in Figure 10.
As observed from Figure 10, the best performance obtained by the BP algorithm is 95.68% when the number of hidden neurons is 10, while the best performance obtained by the BP algorithm optimized by MEA is 98.76% when the number is five.
Thus, the BP algorithm optimized by MEA obtains better performance than plain BP with fewer hidden neurons. The standard deviation of the generalization performance of BP optimized by MEA is also much smaller than that of BP, meaning that it runs much more stably and converges faster. The relationship between training time and number of neurons for BP optimized by MEA and for plain BP is shown in Figure 11: the BP algorithm optimized by MEA greatly reduces the training time compared with the BP algorithm.
Experiment 6: Test for the Overall Foreign Matter Detection System
We selected 200 bottles of sodium chloride injection to test our detection system, among which 150 samples have typical foreign matter and 50 samples are without foreign matter. To describe the overall system performance, the classification rate for each class, and how each class is classified with respect to the others, are shown in the confusion matrix in Table 2. Each row i of the confusion matrix corresponds to the true class of the test sample; each column j corresponds to the class decided by the system; and each cell gives the percentage of objects of class i classified as class j. An ideal system would have a diagonal confusion matrix with 100% in each diagonal term.
A misdetection occurs when an unqualified sample is judged qualified, and a false alarm occurs when a qualified sample is judged unqualified. A misclassification is an unqualified sample whose foreign matter is detected accurately but assigned to the wrong class. As shown in the confusion matrix of the detection system, the overall misdetection rate is 1.096% and the false alarm rate is 1.250%. The unqualified samples with the three classes of foreign matter are detected at rates of 97.833%, 98.714% and 97.64%, respectively, and the qualified-sample detection accuracy is 98.75%. The misdetection and false alarm rates are low, and the detection accuracy is satisfactory.
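These rates follow directly from a confusion matrix of object counts. A sketch, assuming class 0 is "bubble" (qualified) and classes 1-3 are the foreign-matter classes:

```python
# System-level rates from a confusion matrix of counts: rows are true
# classes, columns are decided classes; class 0 = bubble, 1..3 = rubber,
# hair, floating debris.
import numpy as np

def system_rates(conf):
    fm_rows = conf[1:, :]                                 # true foreign matter
    misdetection = fm_rows[:, 0].sum() / fm_rows.sum()    # judged as bubble
    false_alarm = conf[0, 1:].sum() / conf[0, :].sum()    # bubble judged f.m.
    fm_block = conf[1:, 1:]
    misclassification = (fm_block.sum() - np.trace(fm_block)) / fm_rows.sum()
    return misdetection, false_alarm, misclassification
```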
The reasons for misdetections and false alarms are as follows. First, in a qualified sample, brightness changes in bubbles may be falsely detected as foreign matter, so the sample is judged unqualified.
Second, in the moving object detection stage, objects may be segmented incompletely: one object can break into several blobs, which confuses the BP classifier when it judges the class. Incomplete segmentation can cause misdetection and misclassification errors as well as false alarms.
Conclusions
This paper describes a new method for the detection and identification of foreign matter in transfusion solution. To avoid large reflective regions on the sidewalls of the plastic bottle, a mixed illumination style is proposed to acquire high-contrast images between foreign matter and the background. For the complex background of a transfusion image, which involves unevenness of the bottle surface, system random noise and light-source instability, Gaussian background modeling extracts moving objects far more reliably than traditional frame difference methods.
Fifteen gray and shape features are extracted for each moving object and assigned appropriate weights by the ReliefF algorithm according to their contributions to classification performance. Since the ReliefF algorithm cannot remove redundant features, the BP algorithm tests feature subspaces of dimension 1 to 15, adding features in descending order of weight, and finally six optimal features are obtained. The BP algorithm optimized by MEA achieves higher classification accuracy and faster convergence for the classification and identification of foreign matter.
Extensive experiments show that the overall detection accuracy of the system is 98.904% for unqualified products and 98.75% for qualified products. The misdetections and false alarms are mainly due to interference from bubbles. Future work will focus on designing mechanisms that reduce the production of bubbles and on applying more effective features to distinguish between bubbles and foreign matter.
"Computer Science"
] |
Carbonic Anhydrase II Increases the Activity of the Human Electrogenic Na+/HCO3− Cotransporter*
Several acid/base-coupled membrane transporters, such as the electrogenic sodium-bicarbonate cotransporter (NBCe1), have been shown to bind to different carbonic anhydrase isoforms to create a "transport metabolon." We have expressed NBCe1 derived from human kidney in oocytes of Xenopus laevis and determined its transport activity by recording the membrane current in voltage clamp, and the cytosolic H+ and Na+ concentrations using ion-selective microelectrodes. When carbonic anhydrase isoform II (CAII) had been injected into oocytes, the membrane current and the rate of cytosolic Na+ rise, indicative of NBCe1 activity, increased significantly with the amount of injected CAII (2-200 ng). The CAII inhibitor ethoxyzolamide reversed the effects of CAII on the NBCe1 activity. Co-expressing wild-type CAII or NH2-terminal mutant CAII together with NBCe1 gave similar results, whereas co-expressing the catalytically inactive CAII mutant V143Y had no effect on NBCe1 activity. Mass spectrometric analysis and the rate of cytosolic H+ change following addition of CO2/HCO3− confirmed the catalytic activity of injected and expressed CAII in oocytes. Our results show that the transport capacity of NBCe1 is enhanced by the catalytic activity of CAII, in line with the notion that CAII forms a transport metabolon with NBCe1.
Cytosolic proton buffering and proton regulation are essential properties of all cells, and regulation of H+ is mainly attributable to Na+- and/or H+/HCO3−-dependent carriers, which can recover intracellular and extracellular pH from an acidification or alkalinization. These transporters include the SLC4 family of HCO3− transporters, divided into three Cl−/HCO3− exchangers (AE1-3) and five Na+-coupled HCO3− transporters (NBCe1, NBCe2, NBCn1, NDCBE, and NCBE), and the family of Na+/H+ exchangers (NHEs) consisting of 9 known isoforms, NHE1-NHE9 (SLC9). The electrogenic sodium-bicarbonate cotransporter (NBCe1) is found in many epithelial cells, such as renal proximal tubule cells (1-3) and glial cells (4-7). One main source of intracellular HCO3− that serves as substrate for the NBC is CO2, the hydration and dehydration of which is catalyzed by the enzyme carbonic anhydrase (CA).
After the report of binding of carbonic anhydrase isoform II (CAII) to the carboxyl terminus of the human anion exchanger AE family (8), evidence has accumulated for a direct interaction between various acid/base transporters and different isoforms of carbonic anhydrase (9,10). Reports of interaction between NBC and CA go back to the finding that application of the CA inhibitor acetazolamide inhibits the transport of bicarbonate across the basolateral cell membrane in the renal proximal tubule of rabbits (11-13). By heterologous expression of the kidney NBCe1 in a mouse proximal tubule cell line, Gross et al. (14) found that CAII increases the short-circuit current through the transporter in an acetazolamide-sensitive manner when the NBCe1 was operating in the 3 HCO3−:1 Na+ mode, but not in the 2:1 mode. In the same study, isothermal titration calorimetry experiments showed binding between the transporter and CAII. Pushkin et al. (15) then showed in the mouse proximal tubule cell line that CAII activity enhances the bicarbonate flux via the NBCe1 and is likely to form a "transport metabolon" with the cotransporter. Experiments with mutated CAII-binding sites on the NBCe1 indicated that CAII activity can only enhance the NBCe1 current when the enzyme is bound to the NBCe1, whereas unbound CAII had no effect on NBCe1-mediated transport. Binding and interaction were also shown for the electroneutral NBC3 with CAII (16), and for the NBCe1 with extracellular CAIV (17). Recently, however, Lu et al. (18), measuring the slope conductance of NBCe1-expressing Xenopus oocytes injected with CAII or co-expressing a fusion protein of NBCe1 and CAII, found no CAII-induced increase in NBCe1 activity. The authors concluded that CAII does not form a transport metabolon with NBCe1, and that the enzymatic activity of CAII in the vicinity of the NBC is unlikely to enhance HCO3− transport substantially when CO2 is nearly in equilibrium across the cell membrane. Furthermore, using solid-phase binding assays with enzyme-linked immunosorbent assay detection, the same group demonstrated that CAII cannot bind to pure SLC4-Ct peptides, and can bind to GST-SLC4-Ct fusion proteins only when the CAII is immobilized and the fusion protein is soluble, but not vice versa (19). These results reject the idea of a physical, and also functional, metabolon formation between CAII and bicarbonate transporters, and disagree with the findings of other groups (8,15-17) reporting binding between CAII and transporters of the SLC4 family.
We have also addressed the question of whether NBCe1 and CAII interact, by heterologously expressing human kidney NBCe1 in Xenopus oocytes with and without injected or co-expressed CAII. We used two mutants of CAII: a catalytically inactive mutant (CAII-V143Y), in which deletion of the hydrophobic pocket of the enzyme's active site diminishes enzyme activity by 3 × 10^5-fold (20,21), and a mutant with 6 exchanged amino acids in the NH2-terminal tail, which prevents binding of the mutant to the anion exchanger AE1. We have measured the activities of NBCe1 and CAII independently by recording cytosolic H+ and Na+, the membrane current in voltage clamp, and by using mass spectrometry. Our results clearly indicate that CAII enhances the transport activity of the NBCe1 under these conditions.
Oocytes of stages V and VI were selected and injected with 14 ng of NBCe1 cRNA dissolved in DEPC-H2O using glass micropipettes and a microinjection device (Nanoliter 2000, World Precision Instruments, Berlin, Germany). Control oocytes were injected with an equivalent volume of DEPC-H2O. CAII was either injected as protein or co-expressed with the NBCe1. For injection of protein, 50 ng of CAII isolated from bovine erythrocytes (C3934, Sigma), dissolved in 25 nl of DEPC-H2O, were injected 20-24 h before electrophysiological measurement. Control oocytes were injected with 25 nl of DEPC-H2O. For co-expression of CAII, 12 ng of CAII cRNA were injected either alone or together with the NBCe1 cRNA.

Intracellular pH and Na+ Measurements-For measurement of intracellular pH (pHi) and membrane potential, double-barreled microelectrodes were used, and for intracellular Na+ (Na+i), single-barreled microelectrodes; their manufacture and application have been described in detail previously (23). Briefly, for double-barreled microelectrodes, two borosilicate glass capillaries of 1.0 and 1.5 mm in diameter were twisted together and pulled to a micropipette. The ion-selective barrel was silanized with a drop of 5% tri-N-butylchlorosilane in 99.9% pure carbon tetrachloride, backfilled into the tip. The micropipette was baked for 4.5 min at 450 °C on a hot plate. The H+-sensitive mixture (Fluka 95291, Fluka, Buchs, Switzerland) was backfilled into the tip of the silanized ion-selective barrel and topped up with 0.1 M sodium citrate (pH 6.0). The reference barrel was filled with 3 M KCl. To increase the opening of the electrode tip, it was beveled with a jet stream of aluminum powder suspended in H2O. Calibration of the electrodes was carried out in oocyte saline by changing the pH by 0.6 units. The central and reference barrels of the electrodes were connected by chlorided silver wires to the head stages of an electrometer amplifier. Electrodes were accepted for use in the experiments when their response exceeded 50 mV per unit change in pH; on average, they responded with 54 mV per unit change in pH. In the experimental chamber, they responded faster to a change in saline pH than the fastest reaction measured in the oocyte cytosol.
For single-barreled Na+-sensitive microelectrodes, a 1.5-mm borosilicate glass capillary was silanized as described above and backfilled with a Na+-sensitive mixture made of 10 weight % sodium ionophore VI (Fluka 71739), 89.5 weight % 2-nitrophenyl octyl ether (o-NPOE), and 0.5 weight % sodium tetraphenylborate. The pipette was topped up with 100 mM NaCl and 10 mM MOPS buffer (pH 7.0). Calibration of the electrodes was carried out in oocyte saline with Na+ concentrations of 5, 10, 15, and 84.5 mM; on average, the electrodes responded with 52 mV for a 10-fold change in the Na+ concentration.
As described previously (24), optimal pH changes were detected when the electrode was located near the inner surface of the plasma membrane. This was achieved by carefully rotating the oocyte with the impaled electrode. All experiments were carried out at room temperature (22-32 °C). Membrane currents were measured with a voltage-clamp amplifier (Axon Instruments). The experimental bath was grounded with a chlorided silver wire coated with agar dissolved in oocyte saline. Oocytes were clamped to a holding potential of −40 mV. To obtain current-voltage relationships, the membrane potential was changed stepwise in 20-mV increments between −120 and +20 mV.
Fluorescent Staining of NBCe1 and CAII in the Oocyte-Frog oocytes, either injected with cRNA for NBCe1 and CAII or with 50 ng of CAII, as well as native control oocytes, were fixed in 4% paraformaldehyde in phosphate-buffered saline. Oocytes were treated with 100% methanol and permeabilized with 0.1% Triton X-100. Unspecific binding sites were blocked with 3% bovine serum albumin and 1% normal goat serum. The cells were incubated overnight at 4 °C in phosphate-buffered saline containing the primary antibodies: guinea pig anti-Na+/HCO3− cotransporter polyclonal antibody (1:100) and rabbit anti-carbonic anhydrase II (human and bovine erythrocytes) polyclonal antibody (1:1000, Chemicon International, Inc.). Oocytes were then incubated with the secondary antibodies (Alexa Fluor 488 goat anti-guinea pig IgG and Alexa Fluor 546 goat anti-rabbit IgG, Molecular Probes, Inc.). The stained oocytes were analyzed with a laser scanning microscope (LSM 510, Carl Zeiss GmbH, Oberkochen, Germany), either as whole oocytes through which cross-sectional optical planes were laid, or by viewing the cell surface. Additionally, oocytes were embedded in 2% agarose and sectioned into 300-μm thick slices with a microtome (752M Vibroslice, Campden Instruments Ltd., United Kingdom).
Determination of CAII Activity-The activity of CAII was determined by mass spectrometry, monitoring the 18O depletion of doubly labeled 13C18O2 (masses 49, 47, and 45). For calculation of the CAII activity of the sample, the rate of 18O degradation was obtained from the linear slope of the log(enrichment) over time, using the spreadsheet analysis software Origin 6.0 (Microcal Software Inc.). The rate was compared with the corresponding rate of the non-catalyzed reaction. Enzyme activity in units was calculated from these two values as defined by Badger and Price (28); by this definition, 1 unit corresponds to 100% stimulation of the non-catalyzed 18O depletion of doubly labeled 13C18O2. For the experiments, the cuvette was filled with 10 ml of oocyte saline with a pH of 7.35, matching the mean intracellular pH of CAII-injected oocytes, at 25 °C. To determine the catalytic activity, CAII-expressing oocytes were pipetted into the cuvette in batches of 20. For calibration, 0.25, 0.5, 1, and 2 μg of CAII were directly added to the cuvette and compared with the activity of oocytes that either expressed CAII or were injected with CAII.

FIGURE 1. Fluorescence staining of NBCe1 and CAII in Xenopus oocytes. Surface view of oocytes expressing NBCe1 injected with CAII (A, E, and I), or co-expressing NBCe1 together with CAII-WT (B and F), the catalytically inactive mutant CAII-V143Y (C and G), and the NH2-terminal mutant CAII-HEX (D and H), respectively. Oocytes were incubated with a guinea pig antibody against NBCe1 and a rabbit antibody against human or bovine CAII, respectively, and secondary antibodies against guinea pig and rabbit, respectively, linked with a fluorescent dye. A-D, signal of the CAII. E-H, overlays of the signals for CAII and NBCe1. I, exemplary image of NBCe1 staining. The staining for NBCe1 in oocytes co-expressing NBCe1 with CAII-WT, CAII-V143Y, and CAII-HEX, respectively, gave similar results as shown in I. J-L, surface view of native control oocytes, incubated with a guinea pig antibody against NBCe1 (J) or a rabbit antibody against human CAII (K) or bovine CAII (L), and a secondary antibody against guinea pig or rabbit linked with a fluorescent dye. M-O, optical cross-sections of an oocyte co-expressing NBCe1 and CAII-V143Y, showing the signal for CAII-V143Y (M), the signal for NBCe1 (N), and an overlay of both signals (O). Staining of oocytes co-expressing NBCe1 with CAII-WT and CAII-HEX, respectively, as well as NBCe1-expressing oocytes injected with CAII from bovine erythrocytes, gave similar results as shown in M-O. P, cross-section through a CAII-injected oocyte sectioned into 300-μm slices. Q and R, optical cross-sections of native control oocytes, incubated with a guinea pig antibody against NBCe1 (Q) or a rabbit antibody against human CAII (R), and a secondary antibody against guinea pig or rabbit linked to a fluorescent dye.
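By the Badger-and-Price definition quoted above, the unit calculation reduces to comparing the fitted slopes of log(enrichment) versus time before and after the sample is added. A sketch (variable names illustrative):

```python
# Enzyme activity in units from 18-O exchange mass spectrometry:
# 1 unit = 100 % stimulation of the uncatalyzed 18-O depletion.
import numpy as np

def ca_units(t_un, log_enrich_un, t_cat, log_enrich_cat):
    k_un = -np.polyfit(t_un, log_enrich_un, 1)[0]     # uncatalyzed slope
    k_cat = -np.polyfit(t_cat, log_enrich_cat, 1)[0]  # catalyzed slope
    return (k_cat - k_un) / k_un                      # activity in units

# The protein amount then follows from the linear calibration against the
# known CAII amounts (0.25-2 micrograms), e.g. np.polyfit(units, amounts, 1).
```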
Calculation and Statistics-Statistical values are presented as mean ± S.E. For calculation of the significance of differences, Student's t test or, where possible, a paired t test was used. In the figures, a significance level of p ≤ 0.05 is marked with a single asterisk, p ≤ 0.01 with a double asterisk, and p ≤ 0.001 with triple asterisks.
RESULTS
Expression of NBCe1-NBCe1-expressing oocytes, either injected with CAII from bovine erythrocytes or co-expressing CAII-WT, CAII-V143Y, or CAII-HEX, respectively, were stained with Alexa dye-linked antibodies against an epitope of NBCe1 or CAII, respectively (see "Experimental Procedures"). Confocal images of the surface (Fig. 1, A-D) show staining in the plasma membrane of the oocyte, whereas no staining was evident in the cytoplasm. An overlay of both signals (Fig. 1, E-H and O) shows co-localization of the two proteins. Native oocytes treated the same way with the dye-linked antibodies showed no visible staining (Fig. 1, J-L, Q, and R). To confirm that cytosolic CAII was localized only in the plasma membrane, and not in the cytosol, and that the fluorescence was not quenched when optical slices were taken from intact oocytes, a CAII-injected oocyte was sectioned into 300-μm thick slices. The confocal image of the upper part of the slice shows that CAII is found only in the oocyte's plasma membrane, not in the cytosol (Fig. 1P). The same is true for oocytes expressing CAII (data not shown). We do not know what anchors the CAII in the membrane, but because this effect is observed not only in NBCe1-expressing oocytes but also in native oocytes injected with, or expressing, CAII, CAII itself may bind to the plasma membrane of oocytes.
We determined the transport activity of the NBCe1 expressed in Xenopus oocytes by simultaneously measuring the membrane current, the intracellular sodium activity, Na+i, and the intracellular proton activity, H+i, of oocytes voltage-clamped to a membrane holding potential of −40 mV. The NBCe1 was challenged by changing from a HEPES-buffered to a 5% CO2, 24 mM HCO3−-buffered solution (Fig. 2A). In NBCe1-expressing oocytes, application of CO2/HCO3− induced a membrane outward current and a rise in the intracellular H+ and Na+ concentrations. Expression of NBCe1 led to a significant reduction of the CO2-induced intracellular acidification (ΔH+) from 91.1 ± 6.1 to 31.0 ± 3.3 nM (p ≤ 0.001; Fig. 2C). The rate of cytosolic H+ change following introduction of CO2/HCO3− was also significantly changed by expressing NBCe1, being 33.0 ± 3.3 and 16.4 ± 1.6 nM/min in native and NBCe1-expressing oocytes, respectively (p ≤ 0.001; Fig. 2D), presumably due to the dampening effect of HCO3− influx via the NBCe1 accompanying the diffusion of CO2 into the cells. The acidification was accompanied by a membrane outward current (ΔIm) of 302 ± 30 nA (Fig. 2G) and an increase in the cytosolic Na+ concentration (ΔNa+) of 4.17 ± 0.56 mM (Fig. 2E) at a rate of 2.07 ± 0.32 mM/min (Fig. 2F), which was not observed in control oocytes injected with H2O instead of NBCe1 cRNA (Fig. 2, B, F, and G). Furthermore, expression of the NBCe1 substantially increased the membrane conductance (Gm) of the oocytes in CO2/HCO3−-buffered saline from 0.8 ± 0.1 to 6.1 ± 0.1 μS (p ≤ 0.001; Fig. 3, A and C). Application of 0.5 mM of the anion exchange inhibitor DIDS increased the CO2/HCO3−-induced acidification in NBCe1-expressing oocytes by 158%, whereas the membrane current and membrane conductance decreased to 20 and 38%, respectively (data not shown), suggesting reduced NBCe1 activity in the presence of DIDS. The reversal potential (Erev) of the CO2/HCO3−-induced current in NBCe1-expressing oocytes was −55.6 mV, suggesting a stoichiometry of 2 HCO3−:1 Na+. Calculated from an intracellular Na+ concentration of 14.0 ± 3.5 mM and an intracellular HCO3− concentration of 21.0 ± 6.3 mM, reversal potentials of −52.6 and −28.0 mV are predicted for stoichiometries of 2:1 and 3:1, respectively.
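The stoichiometry estimate can be checked with the reversal-potential relation for a 1 Na+ : n HCO3− cotransporter. In the sketch below, the intracellular concentrations are those given in the text; the bath concentrations (84.5 mM Na+, 24 mM HCO3−) are assumptions consistent with the salines described under "Experimental Procedures" and chosen to reproduce the reported values.

```python
# Reversal potential of a 1 Na+ : n HCO3- cotransporter:
#   E_rev = RT/((n-1)F) * ln( [Na]i * [HCO3]i**n / ([Na]o * [HCO3]o**n) )
# (net charge per inward cycle is (1 - n), hence the (n-1) denominator).
import numpy as np

RT_F = 25.7e-3  # V, at roughly 25 C

def e_rev(n, na_i=14.0, hco3_i=21.0, na_o=84.5, hco3_o=24.0):
    return RT_F / (n - 1) * np.log((na_i * hco3_i**n) / (na_o * hco3_o**n))

print(e_rev(2) * 1e3, e_rev(3) * 1e3)
# approximately -53 mV and -28 mV, close to the reported -52.6 and -28.0 mV
```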
Effect of CAII Injection-Injection of CAII into native control oocytes resulted in a significant increase in the rate of rise of the intracellular H+ concentration (ΔH+/t) from 33.0 ± 3.3 to 128.0 ± 15.8 nM/min following introduction of CO2/HCO3− (p ≤ 0.001; Fig. 2D). The amplitude of the CO2/HCO3−-induced acidification was, however, not changed by injection of CAII (92.5 ± 6.3 versus 91.1 ± 6.1 nM; Fig. 2C). Injection of CAII into native oocytes did not induce any membrane currents or changes in the cytosolic Na+ concentration (Fig. 2, B and E-G), and did not change the membrane conductance Gm of the oocytes (0.8 ± 0.1 versus 0.9 ± 0.1 μS; Fig. 3, A-C).
Injection of CAII into NBCe1-expressing oocytes increased the amplitude of the CO2/HCO3−-induced acidification from 31.0 ± 3.3 to 51.0 ± 6.5 nM (p ≤ 0.05; Fig. 2C), and, as in native control oocytes, also increased the rate of rise of the intracellular H+ concentration following application of CO2/HCO3− from 16.4 ± 1.6 to 145.1 ± 35.5 nM/min (p ≤ 0.001; Fig. 2D). In contrast to control oocytes, however, injection of CAII into NBCe1-expressing oocytes increased the membrane current in CO2/HCO3−-buffered solution from 302 ± 30 to 568 ± 37 nA (p ≤ 0.001; Fig. 2G), and the rate of rise in Na+ concentration from 2.07 ± 0.32 to 8.20 ± 1.80 mM/min (p ≤ 0.05; Fig. 2F). The amplitude of the cytosolic Na+ change was not significantly different, indicating an increase in transport rate but not in the absolute amount of transported substrate. Furthermore, injection of CAII added an additional membrane conductance of 2.5 ± 1.6 μS (Fig. 3, B-D), leading to a membrane conductance of 8.7 ± 0.6 μS in NBCe1-expressing oocytes (Fig. 3, A and C). The membrane current in NBCe1-expressing oocytes injected with CAII reversed at −53.5 mV, compared with −55.6 mV in oocytes not injected with CAII (Fig. 3A). The membrane current attributable to CAII, isolated by subtraction of the I/V curves, had a reversal potential of −46.2 mV and a slope conductance of 2.5 ± 1.6 μS (Fig. 3, B and D). Thus, these results suggest that injection of CAII into NBCe1-expressing oocytes enhances the transport activity of NBCe1, as indicated by the increased CO2/HCO3−-induced membrane current and conductance, as well as by the increased rate of rise of cytosolic Na+.

FIGURE 3 (caption, partial). B, CAII-dependent current-voltage relationship, obtained by subtraction of the currents in NBCe1-expressing and in native oocytes with and without injected CAII in CO2/HCO3−-buffered solution. In NBCe1-expressing, but not in native, oocytes, CAII adds an additional membrane conductance of 2.5 μS (C and D).
Effect of EZA on CAII-injected Oocytes-To check whether the increase in NBCe1 activity by CAII injection depends on the catalytic activity of the CAII, we used the potent enzyme inhibitor 6-ethoxy-2-benzothiazolesulfonamide (EZA, 10 μM). Application of EZA to NBCe1-expressing oocytes injected with CAII reduced the rate of rise of the intracellular H+ concentration from 216.2 ± 48.7 to 18.6 ± 3.0 nM/min (p ≤ 0.01; Fig. 4, A and C), the latter value being similar to the values of NBCe1-expressing oocytes not injected with CAII before (18.6 ± 3.0 nM/min) and after (22.2 ± 5.1 nM/min) application of EZA (Fig. 4C, n.s.). This also suggests that the CA activity in native Xenopus oocytes is negligible. EZA decreased the membrane currents in NBCe1-expressing oocytes injected with CAII from 513 ± 55 to 376 ± 48 nA (p ≤ 0.001), a value similar to the currents in NBCe1-expressing cells not injected with CAII, being 282 ± 29 nA before and 257 ± 36 nA after application of EZA (Fig. 4B, n.s.). The rate of the CO2/HCO3−-induced rise in the cytosolic Na+ concentration was decreased in the presence of EZA from 8.20 ± 1.18 to 1.91 ± 0.60 mM/min (p ≤ 0.05), a value similar to those for oocytes not injected with CAII, being 2.07 ± 0.32 and 1.86 ± 0.30 mM/min in the absence and presence of EZA, respectively (Fig. 4D, n.s.). Application of 0.1% (v/v) ethanol, the dissolvent of EZA, alone had no effect on the CO2/HCO3−-induced membrane current, the rate of rise of the intracellular H+ concentration, or the membrane conductance (Fig. 4E), indicating that the observed effects of EZA were directly related to the blocker and not to the dissolvent. In summary, blocking the catalytic activity of CAII with EZA reversed the CAII-induced increase in the rate of rise of the cytosolic H+ and Na+ concentrations and in the membrane current, but had no significant effect on these parameters in oocytes not injected with CAII. This indicates that the CAII-mediated enhancement of NBCe1 transport activity is attributable to the catalytic activity of CAII.
[Figure 4 legend: B-D, statistical summary of the parameters measured in A: changes in the membrane current (B), the rate of change of the intracellular H+ concentration (C), and of the intracellular Na+ concentration (D) in NBCe1-expressing oocytes, injected with either CAII or H2O, during application of CO2/HCO3− before and after addition of 10 μM EZA. EZA decreased the membrane current and the rate of rise of the intracellular H+ and Na+ concentrations in CAII-injected oocytes, but had no effect on oocytes expressing NBCe1 alone. E, original recordings from an NBCe1-expressing oocyte injected with 50 ng of CAII in the absence (gray trace) and presence (black trace) of ethanol (0.1%, v/v), the dissolvent of the CAII inhibitor, during application of 5% CO2, 24 mM HCO3−; the upper trace shows the membrane current, the lower trace the intracellular H+ concentration.]

Dose-response Curve of CAII-Whereas other studies working with CAII injected into Xenopus oocytes used an amount of 50 ng of CA/oocyte (29, 30), Lu et al. (18) injected an overall amount of 300 ng/oocyte. To determine the dependence of the CAII-mediated increase in NBCe1 activity on the concentration of CAII, we injected 2, 10, 50, or 200 ng of CAII into NBCe1-expressing oocytes and determined the NBCe1 activity by measuring membrane current and conductance during application of 5% CO2, 24 mM HCO3−-buffered solution in the absence and presence of the CAII inhibitor EZA (10 μM). Both the CO2/HCO3−-induced membrane current and the membrane conductance showed a dependence on the concentration of injected CAII in the presence of CO2/HCO3− (Fig. 5, A and C, filled symbols). In the presence of EZA, no dependence of membrane current and membrane conductance on CAII concentration was observed (Fig. 5, A and C, open symbols). Subtracting the CO2/HCO3−-induced membrane current and conductance in the presence of EZA from the current and conductance observed before application of the CA inhibitor gives the EZA-sensitive membrane current and conductance, as shown in Fig. 5, B and D. Both the EZA-sensitive membrane current and the EZA-sensitive membrane conductance show a clear dependence on the CAII concentration. The curves indicate that half-maximal effects on the current and conductance of NBCe1 were obtained with a concentration of 20-30 ng of CAII.
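The EZA-sensitive current is obtained by subtraction, and the half-maximal amount can then be read off a fitted saturation curve. In the Python sketch below, the group means are invented placeholders (the paper reports them only graphically), and the one-site saturation form is our assumption, not the authors' stated model:

```python
import numpy as np
from scipy.optimize import curve_fit

# Hypothetical group means: CO2/HCO3--induced current (nA) at each
# injected CAII amount (ng/oocyte), before and during EZA.
caii_ng       = np.array([0.0, 2.0, 10.0, 50.0, 200.0])
i_total_nA    = np.array([280.0, 300.0, 350.0, 455.0, 515.0])  # assumed
i_with_eza_nA = np.array([280.0, 281.0, 276.0, 282.0, 284.0])  # assumed

i_eza_sensitive = i_total_nA - i_with_eza_nA  # CAII-attributable current

def binding_curve(c, i_max, k_half):
    """Simple one-site saturation curve; k_half is the CAII amount
    giving the half-maximal EZA-sensitive current."""
    return i_max * c / (k_half + c)

(i_max, k_half), _ = curve_fit(binding_curve, caii_ng, i_eza_sensitive,
                               p0=(250.0, 25.0))
print(f"half-maximal effect at ~{k_half:.0f} ng CAII/oocyte")
```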
CAII Activity in Oocytes Analyzed with Mass Spectrometry-We also co-expressed wild-type CAII (CAII-WT) with NBCe1 to see whether a similar interaction between the two proteins is observed under these conditions. To compare the amount of expressed with the amount of injected CAII protein in the oocytes, we determined by mass spectrometry the enzymatic activity of CAII injected into oocytes and of CAII-WT expressed alone or together with NBCe1. Original recordings of the log enrichment of 20 oocytes injected with 50 ng of CAII, of 20 oocytes expressing CAII-WT alone or together with NBCe1, and of 1 μg of CAII added directly into the measuring cuvette are shown in Fig. 6A. The first part of each curve gives the non-catalyzed degradation of labeled CO2; the black arrow indicates the addition of oocytes or enzyme as indicated. The statistical analysis of the enzyme activity, calculated from the difference between the non-catalyzed and the CAII-catalyzed change in the log enrichment, shows CAII activity in all samples (Fig. 6B). Twenty oocytes injected with CAII showed an activity of 25.4 ± 1.7 units/ml, in the same range as CAII-WT expressed together with NBCe1, with an activity of 25.3 ± 5.2 units/ml. CAII-WT expressed in native oocytes showed a slightly higher activity of 33.3 ± 1.7 units/ml (n.s.). We calibrated the CAII activity by measuring the enzymatic activity of defined amounts of CAII protein (0.25, 0.5, 1, and 2 μg) to generate a calibration curve (Fig. 6C). Using the linear regression of this relationship, the amount of protein was calculated from the measured enzyme activity (Fig. 6D). Native oocytes injected with cRNA for CAII-WT expressed an average amount of 64.5 ± 3.3 ng of CAII/oocyte, whereas oocytes additionally injected with NBCe1 cRNA expressed 49.3 ± 10.0 ng of CAII/oocyte, which corresponds well with the amount of CAII injected into each oocyte (50 ng). Calculating the amount of active CAII in oocytes injected with 50 ng of CAII each gave 49.4 ± 3.2 ng of CAII/oocyte, indicating that virtually all CAII injected into the oocytes displayed catalytic activity.
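The conversion from measured activity to nanograms of CAII per oocyte is a two-step calculation: a linear calibration (activity vs. known protein amount) followed by inversion and division by the number of oocytes in the cuvette. The sketch below uses placeholder calibration values chosen to be roughly consistent with the numbers quoted above; they are not the published calibration data:

```python
import numpy as np

# Calibration: enzymatic activity (units/ml) measured for known
# amounts of purified CAII. Values are placeholders.
caii_ug  = np.array([0.25, 0.5, 1.0, 2.0])
activity = np.array([6.5, 13.0, 26.0, 52.0])  # units/ml, assumed

slope, intercept = np.polyfit(caii_ug, activity, 1)

def activity_to_ng_per_oocyte(units_per_ml, n_oocytes=20):
    """Invert the calibration line and split the total amount
    over the batch of oocytes in the cuvette."""
    total_ug = (units_per_ml - intercept) / slope
    return total_ug * 1000.0 / n_oocytes  # ng per oocyte

print(round(activity_to_ng_per_oocyte(25.4), 1), "ng CAII/oocyte")
```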
Co-expression of NBCe1 with CAII-WT-The NBCe1 activity was measured in oocytes in which NBCe1 was co-expressed with CAII-WT by simultaneously recording the membrane current and the intracellular Na+ and H+ concentrations of voltage-clamped oocytes during application of 5% CO2, 24 mM HCO3− before (Fig. 7A, black traces) and after addition of the CAII inhibitor EZA (10 μM; Fig. 7A, gray traces). Application of CO2/HCO3− to NBCe1 and CAII-WT co-expressing oocytes led to an intracellular acidification of 13.1 ± 2.0 nM, whereas application of CO2/HCO3− to oocytes expressing NBCe1 alone induced an alkalinization of 8.3 ± 5.0 nM (Fig. 7, C and D). Application of EZA to NBCe1 + CAII-WT co-expressing oocytes resulted in an alkalinization during application of CO2/HCO3− (4.6 ± 4.6 nM). The alkalinization is likely due to the import of HCO3− via NBCe1. Expression of CAII-WT accelerates the hydration of CO2 to an extent that the formation of H+ is faster than the import of HCO3− via NBCe1, resulting in an intracellular acidification. Blocking CAII-WT with EZA reduces the rate of CO2 hydration, leading to an alkalinization during application of CO2/HCO3−, presumably due to the fast inward transport of HCO3− via NBCe1. Co-expression of CAII together with NBCe1 led to a significant increase in the membrane current during application of CO2/HCO3−, from 720 ± 40 nA in oocytes expressing NBCe1 alone to 1024 ± 52 nA (p ≤ 0.001). This increase in current was largely reversed by EZA, which decreased the current to 763 ± 40 nA (p ≤ 0.001; Fig. 7B). EZA had no effect on the membrane current in oocytes expressing NBCe1 alone.
The rate of rise of the cytosolic Na+ concentration in oocytes expressing NBCe1 + CAII-WT was 7.24 ± 1.29 mM/min, which was significantly larger than in oocytes expressing NBCe1 alone (1.69 ± 0.31 mM/min; p ≤ 0.01). EZA reduced the rate of Na+ rise significantly, to 2.75 ± 0.60 mM/min (p ≤ 0.001). In oocytes expressing NBCe1 alone, EZA had no effect on the rate of intracellular Na+ rise, being 1.69 ± 0.31 mM/min before and 1.39 ± 0.39 mM/min during application of EZA (Fig. 7F). The amplitude of the increase in the intracellular Na+ concentration mediated by NBCe1 was changed neither by co-expression with CAII-WT nor by application of EZA (Fig. 7E).
The current-voltage relationships of oocytes expressing NBCe1 alone and together with CAII-WT, in the presence and absence of EZA, were very similar (Fig. 8A). It must be noted that in these batches of oocytes, the background currents and conductances were relatively large, being around 1 μA at 0 mV with a slope conductance of 16-18 μS (Fig. 8, A and E). The reversal potential was not changed by the expression of CAII-WT or by EZA; values between −55.2 and −56.6 mV were measured in the different types of oocytes (Fig. 8A). By subtracting the currents of oocytes expressing NBCe1 with and without CAII-WT, a current attributable to CAII-WT could be isolated (Fig. 8B), which amounted to 150 nA at 0 mV, very similar to that obtained for injected CAII (see Fig. 3B).
In 5% CO2, 24 mM HCO3−-buffered solution, the membrane conductance of oocytes expressing NBCe1 + CAII-WT was 18.2 ± 0.7 μS before and 16.3 ± 0.8 μS after addition of EZA (p ≤ 0.001; Fig. 8, A and E). NBCe1-expressing oocytes without CAII-WT had a membrane conductance of 15.7 ± 0.7 μS before and 16.8 ± 0.8 μS during application of EZA (p ≤ 0.01; Fig. 8, A and E). Co-expression of CAII-WT together with NBCe1 added an additional membrane conductance of 2.2 ± 0.1 μS in the presence of CO2/HCO3− (Fig. 8, B and F), whereas expression of CAII-WT without NBCe1 had no effect on the membrane conductance. Application of 10 μM EZA in CO2/HCO3−-buffered solution increased the membrane conductance of NBCe1-expressing oocytes without CAII by 1.1 ± 0.1 μS and, with CAII-WT, decreased Gm by 1.2 ± 0.2 μS (Fig. 8C). Subtracting the EZA-sensitive currents of NBCe1 + CAII-WT-expressing oocytes from the EZA-sensitive currents of oocytes expressing NBCe1 alone revealed the effect of EZA on the CAII-induced current (Fig. 8D), amounting to 130 nA at 0 mV. The slope of the linear regression line gives a conductance of 2.4 ± 0.2 μS, a value close to the 2.2 ± 0.1 μS increase mediated by CAII-WT (Fig. 8D), as indicated by the dotted line.
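The subtraction logic used throughout Figs. 3 and 8 (isolating a CAII-attributable current as the difference of two I/V relationships, then reading its slope conductance from a linear fit) can be summarized in a few lines of Python. The synthetic I/V curves below are assumptions chosen to echo the reported magnitudes (about 150 nA at 0 mV and 2.2 μS); they are not the measured data:

```python
import numpy as np

# Hypothetical I/V data (nA) on a common voltage grid (mV);
# real curves would come from the voltage-step protocol.
v_mV = np.arange(-120, 40, 20)

def slope_conductance_uS(v_mV, i_nA):
    """Slope conductance from a linear fit of the I/V relationship.
    nA/mV is numerically equal to uS."""
    g, _ = np.polyfit(v_mV, i_nA, 1)
    return g

# Subtracting the I/V curve of NBCe1-only oocytes from that of
# NBCe1 + CAII-WT oocytes isolates the CAII-attributable current.
i_nbce1      = 16.0 * v_mV + 890.0   # assumed ~16 uS, E_rev ~ -55.6 mV
i_nbce1_caii = 18.2 * v_mV + 1040.0  # assumed ~18.2 uS with CAII-WT

i_caii = i_nbce1_caii - i_nbce1
print(f"CAII-attributable: {i_caii[v_mV == 0][0]:.0f} nA at 0 mV, "
      f"{slope_conductance_uS(v_mV, i_caii):.1f} uS")
```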
Co-expression of NBCe1 with CAII Mutants-To specify the role of the catalytic activity of CAII, as well as of possible binding of the enzyme, in the augmentation of NBCe1 activity, we co-expressed NBCe1 with the catalytically inactive mutant CAII-V143Y and with the NH2-terminal mutant CAII-HEX. The activity of NBCe1 was measured in oocytes in which NBCe1 was co-expressed with CAII-HEX or CAII-V143Y by simultaneously recording the membrane current and the intracellular Na+ and H+ concentrations of voltage-clamped oocytes during application of 5% CO2, 24 mM HCO3− before (Fig. 9A, black traces) and after addition of the CAII inhibitor EZA (10 μM; Fig. 9A, gray traces; Table 1). Similar to oocytes co-expressing NBCe1 with CAII-WT, application of 5% CO2, 24 mM HCO3− resulted in a robust acidification in NBCe1 + CAII-HEX co-expressing oocytes, whereas application of CO2/HCO3− in the presence of 10 μM EZA induced a slight alkalinization (Fig. 9, A and C). As in CAII-WT-expressing oocytes, co-expression of NBCe1 with CAII-HEX resulted in a CO2/HCO3−-induced membrane current of 983 ± 21 nA (Fig. 9, A and B), an increase in intracellular Na+ with a rate of rise of 5.29 ± 0.98 mM/min (Fig. 9, A and D), and a membrane conductance of 17.7 ± 0.3 μS (Fig. 9, E and F). All three indicators of NBCe1 activity were significantly decreased by application of the CA inhibitor EZA: the membrane current fell to 718 ± 28 nA (p ≤ 0.001), the rate of rise in intracellular Na+ was reduced to 2.71 ± 0.57 mM/min (p ≤ 0.05), and the membrane conductance decreased to 13.3 ± 0.3 μS (p ≤ 0.01).
Co-expression of NBCe1 together with the catalytically inactive mutant CAII-V143Y resulted in a small CO2/HCO3−-induced acidification before, and a small alkalinization during, application of EZA (Fig. 9, A and C). The membrane current, the rate of rise in the intracellular Na+ concentration, and the membrane conductance of NBCe1 + CAII-V143Y co-expressing oocytes were not significantly different from the values of oocytes expressing NBCe1 alone as shown in Fig. 7, and none of these parameters showed any sensitivity to EZA. The CO2/HCO3−-induced membrane current was 644 ± 61 nA before and 696 ± 58 nA during application of EZA (Fig. 9B, n.s.), the rate of rise in intracellular Na+ was 1.78 ± 0.29 mM/min before and 1.72 ± 0.48 mM/min during application of EZA (Fig. 9D, n.s.), and the membrane conductance was 14.0 ± 0.2 μS before and 14.3 ± 0.2 μS during application of EZA (Fig. 9, E and F, n.s.).
We also measured the enzymatic activity of the expressed CAII mutants by mass spectrometry. Twenty oocytes expressing CAII-HEX alone showed a catalytic activity of 41.2 ± 1.9 units/ml, whereas 20 oocytes co-expressing CAII-HEX together with NBCe1 had an activity of 36.7 ± 0.7 units/ml (Fig. 9G). Using the calibration curve shown in Fig. 6C, amounts of CAII-HEX of 79.7 ± 6.2 and 71.0 ± 1.8 ng/oocyte were calculated for oocytes expressing CAII alone and together with NBCe1, respectively (Fig. 9H). Oocytes expressing the catalytically inactive mutant CAII-V143Y alone or together with NBCe1 showed a low catalytic activity of 2.6 ± 0.5 and 1.9 ± 0.8 units/ml, respectively (Fig. 9G), indicating the significant catalytic deficit of the mutant CAII-V143Y.
Dependence of the CAII-induced Activation of NBCe1 on the Membrane Potential-We examined whether the CAII-induced increase in NBCe1 activity was due to voltage-clamping the oocytes to −40 mV during application of CO2/HCO3−, which leads to a strong activation of the transporter. We therefore also measured NBCe1 activity at −80 mV, where NBCe1 is less active (25), and compared the effect of injected CAII on NBCe1 activity at the two holding potentials. Fig. 10A shows an original recording of an NBCe1-expressing oocyte injected with 50 ng of CAII, clamped to a membrane potential of either −40 or −80 mV. With CAII injected, the CO2/HCO3−-induced membrane currents in NBCe1-expressing oocytes were 1105 ± 47 and 774 ± 42 nA at −40 and −80 mV, respectively (p ≤ 0.001; Fig. 10B). Both values were significantly larger than in oocytes not injected with CAII, being 609 ± 39 nA at −40 mV (p ≤ 0.001) and 485 ± 50 nA at −80 mV (p ≤ 0.001). In NBCe1-expressing oocytes with CAII, the rate of change of the intracellular H+ concentration increased from 53.9 ± 10.5 to 76.7 ± 16.9 nM/min when the membrane potential was changed from −40 to −80 mV (p ≤ 0.05), which was significantly larger than the rate of change in NBCe1-expressing cells without injected CAII, being 12.6 ± 2.4 nM/min at −40 mV (p ≤ 0.01) and 18.1 ± 4.6 nM/min at −80 mV (p ≤ 0.01; Fig. 10C). The rate of rise of the intracellular Na+ concentration decreased with a more negative membrane potential; injection of CAII induced an increase in the rate of rise from 2.55 ± 1.55 to 7.51 ± 0.83 mM/min at −40 mV (p ≤ 0.05) and from 2.04 ± 0.69 to 5.00 ± 0.48 mM/min at −80 mV (p ≤ 0.05; Fig. 10D). The membrane conductance, as calculated from the slope of the I/V curves (Fig. 10E), was not affected by changing the membrane holding potential from −40 to −80 mV (Fig. 10F). At both holding potentials, the conductance of NBCe1-expressing cells in CO2/HCO3−-buffered solution was larger when CAII had been injected (19.7 ± 0.4 μS as compared with 15.3 ± 0.3 μS at −40 mV, p ≤ 0.01, and 19.2 ± 0.2 μS as compared with 15.4 ± 0.4 μS at −80 mV, p ≤ 0.01).

[Figure 7 legend: the upper traces show the changes in membrane current, the middle traces the intracellular H+ concentration, and the lower traces the intracellular Na+ concentration during application of 5% CO2, 24 mM HCO3− before (black traces) and after (gray traces) addition of the CAII inhibitor EZA (10 μM). B-F, statistical summary of the membrane current (B), the absolute change (C) and rate of change (D) of the intracellular H+ concentration, and the absolute change (E) and rate of change (F) of the intracellular Na+ concentration in oocytes co-expressing NBCe1 and CAII-WT or expressing NBCe1 alone.]
The CAII-induced activation of NBCe1 was also measured when CO2/HCO3− was applied under current-clamp conditions (Fig. 11A) and compared with that when the oocyte membrane was held in voltage clamp (Fig. 11B, VC). Because application of CO2/HCO3− activates NBCe1, a large voltage gradient would be created with the oocyte membrane held at −40 mV. On the other hand, with the oocyte membrane not voltage-clamped, a substantial hyperpolarization is observed, which would be expected to decrease the driving force for NBCe1 and hence reduce NBCe1 activity. The membrane hyperpolarization was not significantly larger when CAII had been injected into the oocytes (Fig. 11C), although the I/V relationships (Fig. 11E) indicated a larger slope conductance (Fig. 11G), being 19.9 ± 0.2 μS with CAII and 15.7 ± 0.4 μS without CAII when the cell was in current clamp (p ≤ 0.05; n = 6) and held in voltage clamp only briefly (<1 min) to obtain the I/V curve. The CO2/HCO3−-induced membrane current in voltage clamp increased when CAII had been injected (Fig. 11D), and the I/V relationships (Fig. 11F) indicated a slope conductance of 21.2 ± 0.6 μS with CAII and 18.2 ± 0.5 μS without CAII when the oocyte was in voltage clamp throughout (p ≤ 0.05; n = 6; Fig. 11G).
DISCUSSION
The present study has analyzed the activity of the human kidney NBCe1 expressed in Xenopus oocytes with and without injected or co-expressed CAII. The results indicate that CAII enhances the transport activity of NBCe1 and support the conclusion of others that NBCe1 forms a transport metabolon with CAII (14, 15), but they contrast with a recent study that did not find evidence for a functional interaction between NBCe1 and CAII (18). Our conclusion is based on recording the membrane current and the cytosolic Na+ concentration as measures of NBCe1 transport activity. Both the current and the rate of cytosolic Na+ rise following addition of CO2/HCO3− were increased in oocytes into which CAII had been injected or in which CAII was co-expressed. An increase in the membrane slope conductance, indicative of enhanced NBCe1 transport activity, could also be attributed to injected or co-expressed CAII. All these effects of CAII on NBCe1 activity could be reversed by blocking the catalytic activity of CAII with EZA. The dependence of the CAII-induced augmentation of NBCe1 activity on the enzyme's catalytic activity was also confirmed with a catalytically inactive mutant of CAII (CAII-V143Y): co-expression of CAII-V143Y with NBCe1 failed to increase NBCe1 activity. The activity of injected and expressed CAII in oocytes was demonstrated by the substantial increase in the rate of H+ change following addition of CO2/HCO3−, as compared with oocytes without CAII, and by measuring and calibrating the catalytic activity of CAII by mass spectrometry.
CAII binding to various acid/base transporters, such as the anion exchanger AE1 or the sodium/hydrogen exchanger NHE, has been shown and can result in considerable augmentation of the transport activity of these proteins (8, 9, 31). Similarly, several studies have reported interaction of NBC isoforms with carbonic anhydrases, in particular NBCe1 with CAII (14, 15), and NBC1 and NBC3 with CAIV (16, 17). These studies were performed either in a transfected mouse proximal convoluted tubule cell line (14, 15) or in transfected HEK293 cells. In the only other study using Xenopus oocytes as the heterologous expression system, Lu et al. (18) measured the slope conductance of oocytes expressing NBCe1 with and without injected CAII, and employed a double fusion protein with NBCe1 and CAII attached to enhanced green fluorescent protein to evaluate the effect of CAII on NBCe1 activity. Lu et al. (18) found that injecting CAII into oocytes markedly accelerated hydration of intracellular CO2 but had no effect on the slope conductance of the NBCe1 current, and that the conductance of oocytes expressing EGFP-NBCe1-CAII was insensitive to the CAII inhibitor EZA. Hence, our study comes to the opposite conclusion of the studies by Lu et al. (18) and Piermarini et al. (19).

[Figure 9 legend: the upper traces show the changes in membrane current, the middle traces the intracellular H+ concentration, and the lower traces the intracellular Na+ concentration during application of 5% CO2, 24 mM HCO3− before (black traces) and after (gray traces) addition of the CAII inhibitor EZA (10 μM). B-F, statistical summary of the membrane current (B), the rate of change of the intracellular H+ concentration (C), the rate of change of the intracellular Na+ concentration (D), and the membrane conductance (F) derived from the current-voltage relationships shown in E, in oocytes co-expressing NBCe1 and CAII-HEX or CAII-V143Y. G, enzymatic activity of CAII-HEX and CAII-V143Y in units/ml, expressed either alone or together with NBCe1, as determined by mass spectrometry; H, amount of expressed CAII-HEX in nanograms/oocyte as calculated from the enzymatic activity in G.]
In the present study, we investigated a possible interaction between NBCe1 and CAII by expressing the transporter in Xenopus oocytes either injected with CAII or co-expressing CAII-WT or CAII mutants. One problem with determining the flux of HCO3− through NBCe1, as done in different studies, is that the rate of pH change is directly affected by the activity of CAII, making it difficult to discriminate between a change in HCO3− flux through NBCe1 and the direct effect of CAII on the changes in pH.
To check for a possible interaction between the two proteins, we therefore measured the changes in membrane current and cytosolic Na+ concentration, two parameters that are directly and exclusively related to NBCe1 transport activity (25, 32). Neither parameter changed in native oocytes or in oocytes injected with CAII alone, showing that both are attributable to the expression of NBCe1. Another indicator of NBCe1 activity, also used by Lu et al. (18), was the membrane slope conductance of the oocytes, which was increased in NBCe1-expressing oocytes in the presence of CO2/HCO3− but remained largely unchanged in the nominal absence of CO2, when NBCe1 is not active (32).
Lu et al. (18) did not measure cytosolic Na+ and analyzed the membrane current only in terms of its slope conductance, which was not changed by injection of 300 ng of CAII. The slope conductance of their oocytes expressing NBCe1 was between 14 and 18 μS, and they found a significant decrease of the slope conductance on injecting Tris buffer with or without CAII (Fig. 4 of Ref. 18). In our oocytes expressing NBCe1, the slope conductance ranged between 6 μS and nearly 21 μS, and we could easily isolate a CAII-attributable current and conductance (Figs. 3 and 8-11). The expression level of NBCe1 in the oocytes might have been higher in the study by Lu et al. (18), because they injected 25 ng of NBCe1 cRNA, whereas in our study 14 ng of NBCe1 cRNA was injected. This may have resulted in a significantly higher membrane conductance of the oocytes in their study (18). The amount of CAII in the oocytes also differed between the two studies: we injected 50 ng of CAII/oocyte, as previously used by Boron's group (29) and by us (30), whereas Lu et al. (18) injected 300 ng of CAII/oocyte. It is not clear why this high amount of CAII was injected by Lu et al. (18), because 50 ng had been found sufficient by the same group before. We determined the concentration dependence of the CAII effect on NBCe1 activity by injecting different amounts of CAII. The resulting dose-response curve indicated that 50 ng of CAII leads to a more than half-maximal augmentation of NBCe1 activity. Native oocytes have virtually no CAII, as indicated by the lack of effect of EZA on the CO2-induced rate of rise of cytosolic H+ (Refs. 28 and 29 and this study). We could also show by mass spectrometry that all CAII injected into oocytes remained active.

[Figure 10 legend: Effect of the membrane potential on the CAII-induced activation of NBCe1. A, original recordings from an NBCe1-expressing oocyte injected with 50 ng of CAII at a membrane potential of −40 and −80 mV; the upper trace shows the membrane current, the middle trace the intracellular H+ concentration, and the lower trace the intracellular Na+ concentration during application of 5% CO2, 24 mM HCO3−. B-F, statistical summary of the parameters measured in A: changes in the membrane current (B) and the rates of change of the intracellular H+ (C) and Na+ (D) concentrations in NBCe1-expressing oocytes injected with either CAII or H2O during application of CO2/HCO3−. E, I/V relationships of NBCe1-expressing oocytes injected with either CAII (filled symbols) or H2O (open symbols) in 5% CO2, 24 mM HCO3−-buffered solution at a holding membrane potential of −40 or −80 mV. F, slope conductance as calculated from the I/V relationships shown in E. Injection of CAII increased the rate of rise of the intracellular H+ and Na+ concentrations, the membrane current, and the membrane conductance at holding potentials of both −40 and −80 mV.]

[Table 1: CO2/HCO3−-induced changes in membrane current, rate of rise in Na+ concentration, and membrane conductance of NBCe1-expressing oocytes either injected with CAII or co-expressing CAII-WT, CAII-V143Y, or CAII-HEX, respectively, before and during application of the CA inhibitor EZA (10 μM).]
In the oocytes with both NBCe1 and CAII co-expressed, the slope conductance was also in the range between 15 and 19 μS, similar to the oocytes in the study of Lu et al. (18). However, when the I/V curves of these oocytes were subtracted, a current could still be isolated with a slope of 2.2 μS, similar to that found in NBCe1-expressing oocytes with injected CAII. It can only be speculated why the conductance of the (NBCe1-expressing) oocytes can vary by more than 2-fold: oocyte batches may vary in their conductance; the expression level of NBCe1 may vary and impose a different conductance, in particular when co-expressed with CAII; and the multiple injections of the oocytes, to deliver cRNA and/or CAII or H2O, can affect the membrane conductance. Because the oocytes in the study of Lu et al. (18) had similar but high conductances in both series, and because they only used the slope conductance, a conductance of 2-3 μS could easily have been missed, especially as EZA appears to increase the membrane conductance on its own in NBCe1-expressing oocytes, as both Lu et al. (18) and we have found in the present study.
In a discussion with Dr. Walter Boron following the presentation of their results (18) and our study at a meeting, we were prompted to perform additional experiments (Figs. 10 and 11). Because the membrane of NBCe1-expressing oocytes hyperpolarizes up to about −120 mV upon the introduction of CO2/HCO3−, owing to the strong activation of NBCe1, voltage-clamping the oocyte membrane to −40 mV keeps NBCe1 at a high activity. Lu et al. (18) had taken their I/V relationship near the zero-current potential; this might have occluded an effect of CAII on NBCe1, because the activity of the cotransporter was too small. When the oocyte membrane was voltage-clamped at −80 mV, where the electrochemical gradient for NBCe1 activity upon application of CO2/HCO3− was much smaller, CAII still enhanced NBCe1 activity. Even when we reproduced the experiments of Lu et al. (18), recording the free oocyte membrane potential during application of CO2/HCO3− and then taking the I/V relationship with the oocyte membrane only briefly in voltage clamp, NBCe1 activity was still increased in the presence of CAII, as measured by the membrane current and the slope conductance of the I/V relationships.

[Figure 11 legend: Changes in membrane potential and membrane current in NBCe1-expressing oocytes with and without injected CAII during application of CO2/HCO3−. Original recordings from an NBCe1-expressing oocyte injected with 50 ng of CAII in current clamp (CC; A) and in voltage clamp (VC; −40 mV; B). The upper traces show the changes in membrane current, the lower traces the changes in membrane potential. To generate an I/V relationship, the oocyte in A was taken into voltage clamp for a short period (<1 min), as indicated by the VC bar in A, and the membrane potential was changed stepwise between −120 and +20 mV.]
It has been shown that phosphorylation of kNBC1-Ser-982 by cAMP-dependent protein kinase A shifts the stoichiometry of the NBC from 3:1 to 2:1 and that this phosphorylation might lower binding of CAII (14, 15). However, when expressed in Xenopus oocytes, NBCe1 operates with a stoichiometry of 2:1 (25, 33, 34). With a measured reversal potential of NBCe1-expressing oocytes in CO2/HCO3−-buffered saline of −55.6 mV, the present study also supports a stoichiometry of 2:1 (the predicted reversal potentials being −54.4 mV for 2:1 and −27.8 mV for 3:1). Injection or co-expression of CAII did not change the reversal potential (−54 and −57 mV, respectively), indicating that NBCe1 still operates in the 2:1 mode when interacting with CAII. In contrast to the studies of Gross et al. (14) and Pushkin et al. (15), our data give no evidence as to whether binding and interaction of CAII with NBCe1 depend on the transporter stoichiometry regulated by protein kinase A.
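The stoichiometry argument rests on the thermodynamic reversal potential of an electrogenic cotransporter moving 1 Na+ and n HCO3− inward per cycle (net charge 1 − n). A standard form of this relation is sketched below in LaTeX; the predicted values of −54.4 mV (2:1) and −27.8 mV (3:1) additionally depend on the intracellular ion activities assumed by the authors, which are not restated here:

```latex
% At equilibrium the electrochemical work per transport cycle vanishes:
%   RT ln([Na+]_i/[Na+]_o) + n RT ln([HCO3-]_i/[HCO3-]_o) + (1-n) F E_rev = 0
E_{\mathrm{rev}} = \frac{RT}{(n-1)F}\,
  \ln\!\left(
    \frac{[\mathrm{Na^+}]_i\,[\mathrm{HCO_3^-}]_i^{\,n}}
         {[\mathrm{Na^+}]_o\,[\mathrm{HCO_3^-}]_o^{\,n}}
  \right)
```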
Pushkin et al. (15) identified two clusters of acidic amino acids (958LDDV and 986DNDD) as putative CAII binding sites in wild-type kNBC1. These clusters show homology to the CAII binding site of the anion exchanger AE1 (886LDADD) (35). Therefore, we also tested on NBCe1 a CAII mutant with changes in the NH2 terminus (CAII-HEX), which is unable to bind to AE1. CAII-HEX increased the activity of NBCe1 similarly to CAII-WT, suggesting that CAII-HEX can still interact with NBCe1. Whether binding of CAII to NBCe1 is not obligatory for the augmentation of transport activity, or whether CAII-HEX binds to NBCe1 at binding sites different from those on AE1, remains to be studied.
In summary, our study does not contribute to the question of whether binding of CAII to NBCe1 is necessary for this interaction, as Gross et al. (14) suggested but Lu et al. (18) and Piermarini et al. (19) refute. However, based on the membrane currents attributable to NBCe1 and on the rate of cytosolic Na+ rise, our results clearly indicate that CAII enhances NBCe1 activity, in line with a transport metabolon as suggested by previous studies (10, 14, 15).
"Medicine",
"Chemistry"
] |
Metal-Ligand Coordination Induced Ionochromism for π-Conjugated Materials
Recent studies indicate that the toxicity of heavy metal ions causes a range of environmental, food, and human health problems. Chemical ionochromic sensors are crucial for detecting these toxic ions. Incorporating organic ligands into π-conjugated polymers makes them receptors for metal ions, resulting in an ionochromism phenomenon that is promising for developing chemosensors for metal ions. This review highlights recent advances in π-conjugated polymers exhibiting ionochromism toward metal ions, which may guide the rational structural design and evaluation of chemosensors.
In recent years, research on π-conjugated systems has been booming in the fields of organic optoelectronics and sensors based on chromogenic effects. Chromogenic systems are required to be responsive to external inputs, such as metal ions (ionochromism) (Cheng and Tieke, 2014), electrons (electrochromism) (Thakur et al., 2012), or light (photochromism) (Bisoyi and Li, 2016), that could be used for image production. Among these chromogenic systems, ionochromism is realized through coordination between metal and ligand, resulting in sensitivity to metal ions, which can be exploited in metal sensors. Trace metal detection plays a vital role in the environment, the human body, and equipment safety. According to the Environmental Protection Agency (EPA), thirteen heavy metal ions are listed as "priority pollutants" because their toxicity has caused a series of environmental problems. It is also of significance to detect the content of alkali metal ions for monitoring the human condition. This review aims to bring together the areas of metal-ligand coordination and π-conjugated systems. Materials based on metal-ligand coordination that show a polymer-like structure in the solid state are referred to as "metal-organic frameworks" (MOFs) (Kaneti et al., 2017; Ding et al., 2019), and in solution as "metallo-supramolecular polymers" (Winter and Schubert, 2016). MOFs and metallo-supramolecular polymers will not be considered here because they have been reviewed extensively in recent years. Therefore, this review focuses on the ionochromic properties of π-conjugated polymers with organic ligands as receptors for metal ions.
IONOCHROMISM
The ionochromic effect in π-conjugated polymers arises from induced conductivity fluctuations, either by destroying the conjugation of the polymer (conformational effect) or by lowering the charge carrier mobility (electrostatic effect). The conductivity of π-conjugated polymers is highly sensitive to the nature and regiospecificity of the side chains, resulting in sensory signal amplification through energy transfer along polymer chains. Figure 1B illustrates the energy-transfer process: upon excitation, energy may migrate along the polymer backbone because of the conjugation. As a result, the π-conjugated polymer acts as a molecular wire, and the conjugated system generates a response greater than that afforded by the same interaction in an analogous small mono-receptor system (Zhou and Swager, 1995).
Conjugated polymers containing organic ligands act as receptors for metal ions. The coordination interaction between metal ions and ligands causes electrostatic or conformational changes, resulting in an ionochromic effect. Ionochromic performance in π-conjugated systems is expected to find use in portable optical devices for the detection of metal ions and some organic cations. The electronic and steric hindrance effects of the ligand determine the selectivity for the type of metal. In the following, the ionochromic phenomena of π-conjugated polymers are introduced according to the type of ligand.
Crown Ether
Crown ethers were discovered by the Nobel Prize winner Charles Pedersen (Pedersen, 1967) more than 50 years ago. Recent progress in the design and applications of crown ether-based chemosensors for small molecules has been reviewed. In contrast to small molecules, π-conjugated polymers have enormous advantages for sensing applications in terms of energy migration and facile exciton transport, which improve the electronic communication between receptors. Additionally, polymers can be processed into films exhibiting semi-permeability to ions. Herein, we focus attention on the design of crown ether-containing π-conjugated polymers and their applications in chemosensors.
Ionochromism was initially reported in the 1990s. Upon coordination with alkali metal ions (K+, Na+, and Li+), polythiophenes with crown ether side chains (1, Figure 2) (Bäuerle and Scheib, 1993; Marsella and Swager, 1993) exhibited interesting sensory effects because of dramatic conformational changes of the polymer chains. Casanovas et al. (2009) studied the affinity of crown ether-functionalized polythiophenes for Na+, K+, and Li+ by quantum mechanical calculations. The results showed that although the association of Li+ with the polythiophene derivatives is entropically unfavored, the binding energies increased in the order K+ < Na+ < Li+. The authors explain that alkali ions of small dimensions undergo large fluctuations when the dimensions of the cavities change, leading to an increase in thermal energy.
In addition to polythiophenes, chemosensors based on other conjugated systems containing benzocrown or azacrown ethers have also been developed. A multi-signal-responsive chemosensor was realized with a poly(phenylene-ethynylene) polymer bearing pendent dibenzocrown groups (2), which responded to multiple stimuli (K+, Cl−, pH, or temperature change) (Ji et al., 2013). Additionally, exposure of a 2-based film to ammonia increased its fluorescence, making it a good candidate for gas sensing. Morgado et al. studied the ionochromic properties of poly(p-phenylenevinylene) (PPV) with benzocrown ether (Morgado et al., 2000). Single-layer devices with Al cathodes showed higher electroluminescence efficiencies than those with Ca cathodes due to the existence of aggregates induced by the crown ether side groups.
Owing to the easy chemical access for modification, the functionalization of polypyrrole by incorporating various active groups on the nitrogen atom has been widely studied. However, such modification of polypyrrole generates a loss of conjugation, resulting in extremely low conductivity for poly(N-substituted pyrroles), of the order of 10−4 S/cm or less (Eaves et al., 1985; Bettelheim et al., 1987). Youssoufi et al. found that the equivalent 3-substituted pyrroles (3) gave rise to highly conducting polymer films, and they developed azacrown ether-substituted polypyrroles with selective cation binding on voltammetric cycling in organic media (Youssoufi et al., 1993). In contrast to benzocrown ethers, a drawback of azacrown ethers is their low thermodynamic stability toward alkali and alkaline-earth metal ions (Ushakov et al., 2008). The main reason is the planar structure at the junction of the azacrown ether and benzene moieties. Increasing the electron-withdrawing ability of the moiety conjugated to the crown ether helps improve the thermodynamic stability. However, this method does not always lead to the expected enhancement of the optical signal induced by metal ions and may conversely attenuate the signal (Izatt et al., 1996; Ushakov et al., 1997). Gromov et al. (2013) synthesized 4-pyridine-, 2-benzothiazole-, and 2- and 4-quinoline-based styryl dyes with an N-methylbenzoazacrown ether as a ligand. Electron spectroscopy studies showed that these compounds have a high sensitivity for alkali metal and alkaline-earth metal cations. In terms of electrochromism and cation-binding capacity, they proved to be far superior to dyes based on phenylazacrown ethers. After complexation with Ba2+, the fluorescence enhancement factor reached 61. A high level of macrocyclic preorganization is one of the factors that determine the high cation-binding capacity of sensor molecules based on N-methylbenzoazacrown ethers.
Alkali metal ions, especially K+ and Na+, are the messengers of living cells, controlling a series of physiological processes through the action of ion channels. Crown ether-containing π-conjugated polymers are highly sensitive to alkali metal ions and could be designed into medical detectors. Nevertheless, most research has focused on the ionochromism of sensors operating in organic solvents rather than in aqueous solutions (Xiang et al., 2014), which is impractical for applications. Additionally, most reported ion-selective films require long incubation times to generate a detectable response, precluding their practical use (Giovannitti et al., 2016). Single-component π-conjugated polymers (4 and 5) have been synthesized that respond selectively and rapidly to varying concentrations of Na+ and K+, respectively, in aqueous media (Wustoni et al., 2019). Using a miniaturized organic electrochemical transistor chip, variations in the concentrations of these two metal ions in a blood serum sample could be measured in real time. Devices based on these crown ether-containing polymers are valuable for analyzing cellular machinery and detecting human body conditions that result in electrolyte imbalance.
Pyridine
The studies on crown ether-substituted π-conjugated polymers have clearly demonstrated ionochromism in alkali-ion chemosensors. It is well known that oligopyridines, such as bipyridine (bpy), terpyridine (tpy), and their derivatives, exhibit an outstanding ability to coordinate a large number of metal ions. If oligopyridines are selected as ligands for π-conjugated polymers, the range of metal ion sensors can be extended from alkali metal ion-selective systems to transition metal ion-selective systems. Additionally, pyridine and its derivatives not only have the electron-accepting ability to coordinate with metal ions but are also reactive in complex-forming reactions such as N-oxidation, N-protonation, and quaternization with RX, which can adjust their optical and electrical properties (Yamamoto et al., 1994).
In 1997, Wang et al. reported a transition metal-induced ionochromic polymer with bpy in the backbone (6) (Wang and Wasielewski, 1997). According to theoretical calculations, there is a 20° dihedral angle between the two adjacent pyridyl rings of bpy in its transoid-like conformation (Cumper et al.). With the addition of metal ions, the chelating effect of the bpy ligand forces the pseudo-conjugated conformation into a planar one and thus makes the polymer fully conjugated, leading to a redshift in the absorption spectra. Besides, incorporating bpy as a ligand directly into the backbone results in a more sensitive response upon addition of metal ions. Different linkers between the bipyridine and the conjugated polymer backbone cause differences in the flexibility and rigidity of the resulting polymers. Liu et al. studied the effect of linkages, including C-C single, vinylene, and ethynylene bonds, on the electronic properties and response sensitivities to metal ions (Liu et al., 2001). During chelation with metal ions, the C-C single-bond linkage provided better flexibility for attaining coplanarity of the pyridine unit. Therefore, the C-C single bond gave the highest sensitivity, followed by the vinylene bond, while the ethynylene bond exhibited the lowest sensitivity. A conjugated polymer containing a 2,6-substituted pyridine derivative (7) was synthesized for Pb2+ sensing (Liu et al., 2011). With the addition of Pb2+, the color changed from yellow-green to brown, which can easily be observed by the naked eye. The detection limit of the polymer is less than 1 ppm, while the threshold of Pb2+ in drugs is 5-10 ppm. Therefore, 7 could be adopted to design an excellent sensor for Pb2+ detection. Increasing the association constant of a molecular recognition event can improve the sensitivity of a sensor. The terpyridine (tpy) ligand possesses an excellent ability to coordinate various metal ions with higher sensitivity than bipyridine. Zhang et al. prepared poly[p-(phenyleneethynylene)-alt-(thienyleneethynylene)] (PPETE) with bpy (8) and tpy (9) as receptors, respectively (Zhang et al., 2002). With the addition of Ni2+, 9 was quenched to 10.9% of its original emission intensity, while 8 was only quenched to 38.2%, illustrating that tpy is more sensitive to the Ni2+ ion. Rabindranath et al. reported tpy-substituted polyiminofluorenes (10) (Rabindranath et al., 2009). Fe2+, Co2+, Ni2+, Cu2+, Pd2+, Fe3+, Gd3+, and Zr4+ led to complete quenching of the green emission of 10, while Zn2+, Cd2+, and Eu3+ caused a weak red emission, redshifted by 10-30 nm compared with pure 10. Additionally, a bis-complex formed upon the addition of Zn2+, leading to red luminescent precipitation. This effect can be used for the detection of Zn2+.
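Quenching figures such as "quenched to 10.9% of the original intensity" compare directly once expressed as a quenching efficiency, 1 − I/I0. A minimal Python sketch using the two values quoted above (a Stern-Volmer constant would additionally require the Ni2+ concentration, which is not given here):

```python
def quenching_efficiency(i_over_i0):
    """Fraction of the original emission removed by the analyte."""
    return 1.0 - i_over_i0

# Residual intensities reported for the PPETE polymers with Ni2+:
for name, frac in (("8 (bpy)", 0.382), ("9 (tpy)", 0.109)):
    print(f"{name}: {quenching_efficiency(frac):.1%} quenched")

# If the quencher concentration [Q] were known, a Stern-Volmer
# constant K_SV = (I0/I - 1) / [Q] could be compared across ligands;
# here only the efficiencies can be compared.
```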
Although tpy exhibits a high coordinating ability with a large number of metal ions, the preparation of tpy ligands is expensive and very time-consuming. In contrast to bpy, 2,6-bis(1′-methylbenzimidazolyl)pyridine (bip) ligands can be synthesized more easily on a large scale (Beck et al., 2005). Kalie Cheng et al. reported a polyimide fluorene (11) with a bip ligand (Cheng and Tieke, 2014) and studied its optical properties with Zn2+ and Cu2+. Due to ion-specific metal-to-ligand charge transfer, the 11/Zn2+ film is orange, while the 11/Cu2+ film is purple. Due to oxidation of the polymer backbone, the 11/Zn2+ and 11/Cu2+ films turn blue if anodically oxidized to 750 mV vs. FOC, and the color change is reversible. The conjugated polymer with the bip ligand exhibited high contrast and short switching times in the color change over 12 dipping cycles. However, the long-term stability of bip with metal ions is lower than that of tpy-based systems.
Some biomacromolecules, such as DNA, RNA, and proteins, are readily inhibited by Pd2+ in vivo and in vitro (Kielhorn et al., 2002). Additionally, Pd2+ can elicit a series of cytotoxic effects, resulting in severe primary skin and eye irritation. It is therefore essential to develop a sensor for the highly selective and sensitive detection of Pd2+. Conjugated polymers 12-14 were prepared via the Sonogashira reaction (Xiang et al., 2014). The conjugated backbones of 13 and 14 are twisted, and these polymers proved to be selective for Ni+. In contrast to 13 and 14, 12 exhibited high selectivity for Ag+ because of its linear conjugated backbone. Theoretically, the same functional group should have the same metal ion recognition capability; according to the ionochromic effects of 12-14, however, changes in the linkage site of the recognition groups resulted in different metal ion selectivities. Cyclic voltammetry measurements of 12-14 were carried out to analyze the cation selectivity in terms of the LUMO and HOMO energy levels. The LUMO levels of 12 are slightly lower than those of 13 and 14, indicating that the electron affinities are in the order 12 > 14 ≈ 13. Additionally, the HOMO levels of 13 and 14 are slightly raised relative to 12, illustrating that the energy barrier for hole injection from the anode is in the order 13 = 14 < 12. As a result, both the electron and hole affinities of 12 are improved, resulting in enhanced carrier injection and transport. Moreover, the smaller coordination cavities in 13 and 14, a consequence of their twisted conjugated backbones, fit well with the smaller radius of Ni+. This work provides guidelines for tuning the structure of conjugated polymers in the design and preparation of selective metal ion sensors.
Despite the successful development of chemosensors based on conjugated polymers, most examples are in the solution state, and few chemosensors based on neat π-conjugated polymer films have been reported. This is because strong interpolymer π-π interactions result in self-quenching of the luminescence in such a condensed solid-state phase (Sahoo et al., 2014). Controlling such random and strong interactions in the solid state is a non-negligible challenge. Hosomi et al. reported a π-conjugated polymer with bipyridine moieties as the ligand and permethylated α-cyclodextrin (PM α-CD) in the main chain (15). The PM α-CD suppresses the interactions between π-conjugated chains and enables the polymer to emit efficiently even in the solid state (Hosomi et al., 2016). Additionally, the metal-ion recognition ability of 15 is maintained in the solid state, leading to reversible changes in the luminescence color in response to cations. The prepared π-polymer is expected to be applicable to recyclable luminescent sensors for detecting different metal ions.
1,10-Phenanthroline

1,10-Phenanthroline (phen) is an electron-poor, rigid, planar, hydrophobic, heteroaromatic ligand that has played an important role in the development of coordination chemistry (Cockrell et al., 2008; Bencini and Lippolis, 2010; Iqbal et al., 2016). Phen is a bidentate ligand for transition metal ions whose nitrogen atoms are beautifully placed to act cooperatively in cation binding. In contrast to the parent bpy and tpy systems, phen is characterized by two inwardly pointing nitrogen donor atoms held in juxtaposition. As a result, phen is preorganized for strong and entropically favored metal binding. A luminescent phen-containing π-conjugated copolymer (16) responsive to Zn2+, Ir3+, and Eu3+ has been reported. The λmax of 16 shifts from 385 to 404 nm on the addition of NiCl2. The photoluminescence intensity of 16 steeply decreases in the presence of Ni2+ because of the chelating effect of the phen unit on Ni2+. Other metal ions also caused similar shifts of λmax: Li+, Mg2+, Al3+, Zn2+, Ag+, and La3+ caused a redshift of λmax to a smaller degree of about 10-35 nm, while Fe3+, Co2+, Ni2+, Cu2+, and Pd2+ gave rise to a larger redshift of about 20-50 nm and complete quenching of the photoluminescence. This quenching phenomenon is related to energy transfer from the π-conjugated polymer to the metal complexes. The Yasuda group further synthesized a copolymer composed of alternating phen and 9,9-dioctylfluorene units (17). The color of the light emitted by the polymer complex could be tuned from blue to red by transition metal ions (Co2+, Ni2+, Cu2+, and Pd2+), as reflected in the absorption spectra. Additionally, Satapathy et al. reported phenanthroline-containing conjugated polymers that show remarkable sensing capabilities toward Fe2+ (Satapathy et al., 2012).
Calixarene
Calixarenes have unique cavity structures, which can be functionalized to recognize metal ions. Moreover, the hydrophobic cavity of the calixarene scaffold can accommodate various gases and organic molecules (Rudkevich, 2007). With the addition of metal ions or small molecules, calixarenes undergo dramatic geometric changes, including phenol ring flips between cone, partial cone, and 1,3-alternate conformational isomers (Gutsche, 1998). Small-molecule calixarene-based sensors for the recognition of transition metal cations have recently been reviewed (Kumar et al., 2019), so here we list only some calixarene-based conjugated polymer sensors. Calixarene-functionalized polymers (18 and 19) were first reported in the 1990s (Marsella et al., 1995). Binding constant measurements of the calixarene-bithiophene gave a Ka of 7.6 × 10^7 for Na+, approximately 100 times stronger than for K+ and 40 times stronger than for Li+. A stronger binding constant means higher sensitivity. The ion recognition behavior of 18 and 19 toward Li+, Na+, and K+ was analyzed by UV-vis absorption and fluorescence emission spectroscopy. The resulting polymers exhibit good selectivity toward Na+, with a 24 nm blue shift for 18 and a 32 nm red shift for 19.
Other calixarene-based receptors in π-conjugated polymers have been reported (Wosnick and Swager, 2004; Costa et al., 2008), in which the calixarene units mainly serve as pendant groups. The direct attachment of the calixarene unit (at the upper rim) to a conjugated polymer (20) has also been reported (Yu et al., 2003). The conical configuration of the calixarene imposes a zigzag orientation on the polymer chain segments. This segmental structure in 20 strongly localizes the carriers, and rapid self-exchange between the discrete units gives rise to the conductivity of such a segmented system. Protonation promotes the electron exchange, resulting in high conductivity for 20. Hence, an electroactive calixarene polymer that requires protonation to become highly conductive was prepared, which is useful for the design of actuating materials. A fluorescent polymer (21) in which calixarene scaffolds are part of an uninterrupted linear polymeric backbone was reported for the first time by Molad et al. (2012). The short conjugated fragments, combined with the nonlinear geometry, gave rise to rather moderate sensitivity to selected stimuli. The coordination properties of the calixarenes in these π-conjugated polymers allow for the recognition of small molecules, such as NO.
Imidazole
Imidazole-based ligands are widely used because of their reversible fluorescence. This reversibility is realized by protonation/deprotonation upon addition of acid/base, or by metallation/demetallation with metal ions/suitable counter-ligands (Jiang et al., 2010). As an important functional conjugated polymer material, polydiacetylene (PDA) has received increasing attention since it was first reported in 1969 (Wegner, 1969). PDA shows significant color conversion and fluorescence enhancement under various environmental stimuli, including heat (Takeuchi et al., 2017), organic solvents (Yoon et al., 2007), bioanalytes (Zhou et al., 2013), ions, and so on. In response to different stimuli, PDA can change to different colors, such as purple, yellow, orange, or red, of which the transition from blue to red is the most common. Owing to their spontaneous color change and the development of fluorescence emission under stimulation, many PDA liposomes with specific receptor groups have been designed and widely used to detect metal ions such as Hg2+ and Cu2+ (Lee et al., 2009; Xu et al., 2011).
An imidazole-functionalized disubstituted polyacetylene (22) was prepared (Zeng et al., 2008). 22 was not sensitive to alkali and alkaline-earth metal ions, nor to the transition metal ions Cd2+, Mn2+, Ag+, and Zn2+, because of the poor coordination ability of the imidazole receptor toward these ions. Nevertheless, Pb2+, Al3+, and Cr3+ quenched the fluorescence of 22 incompletely, while Cu2+, Co2+, Fe2+, Fe3+, and Ni2+ quenched its fluorescence more efficiently. In particular, Cu2+ quenched the fluorescence entirely at a very low concentration (1.48 ppm). Satapathy et al. reported imidazole-based polymers (23-25) that show significant ion recognition ability toward Fe2+ in semi-aqueous solutions (Satapathy et al., 2012). The fluorescence lifetime of polymer 25 decreased more strongly (11.4-fold) than those of 23 (4.6-fold) and 24 (6.2-fold), illustrating that 25 has superior sensing capability by virtue of its stronger molecular-wire effect. The fluorescence of these three polymers was recovered by adding phenanthroline or Na2-EDTA. Additionally, the selectivity of 23-25 for Fe2+ was not interfered with by other competing metal ions.
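Detection limits in this field are quoted interchangeably in ppm and molar units; for dilute aqueous solutions, 1 ppm is approximately 1 mg/L, so the conversion to micromolar only needs the molar mass. A small Python sketch (the mg/L reading of "ppm" is the usual dilute-solution assumption):

```python
# Converting a detection limit quoted in ppm (~ mg/L in dilute
# aqueous solution) to a molar concentration, using standard
# molar masses of the metal ions.
MOLAR_MASS_G_PER_MOL = {"Cu2+": 63.55, "Pb2+": 207.2}

def ppm_to_uM(ppm, species):
    mg_per_L = ppm  # dilute-solution approximation
    return mg_per_L / MOLAR_MASS_G_PER_MOL[species] * 1000.0  # umol/L

print(f"1.48 ppm Cu2+ ~ {ppm_to_uM(1.48, 'Cu2+'):.1f} uM")
print(f"1 ppm Pb2+   ~ {ppm_to_uM(1.0, 'Pb2+'):.1f} uM")
```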
CONCLUSIONS AND OUTLOOK
π-Conjugated polymers represent useful chemical platforms for the design of chemosensors for metal ions. In this review, we have summarized the types and characteristics of functional groups that chelate different metal ions, as well as the ionochromic effects of the π-conjugated polymers based on these functional groups. In the past few decades, significant progress has been made in the development of novel chemosensors for environmental protection, food and drug testing, and human health monitoring. Although these materials have been studied extensively, preparing chemical sensors with high sensitivity, long-term stability, and selectivity remains a critical challenge. The chemical and physical relationships underlying ligand-metal coordination also need further study to improve the theoretical guidance for the preparation of metal sensors.
AUTHOR CONTRIBUTIONS
JL and JH: conceptualization and design. LH and ZW: acquisition of data. JG: software. JL and LH: analysis of data. JL: drafting of article. JL, LH, JG, JH, and ZW: final approval of manuscript. All authors contributed to the article and approved the submitted version.
"Chemistry"
] |
Looking for the E-commerce Quality Criteria: Different Perspectives
To assure the successful development of e-commerce, it is necessary to define criteria that could guide the choice of a competitive e-commerce system and the evaluation of its quality. The aim of this study was to bring together and analyze the e-commerce quality criteria proposed in the scientific literature, by e-commerce experts, and regarded as important by customers. The analysis allowed us to identify a triple set of e-commerce quality criteria, as well as the differences among the criteria provided by the invoked sources.
Introduction
With the internet penetrating daily life, a new business area has emerged: an electronic market that develops quite rapidly and has an ever-growing impact on the economy. The statistically remarkable growth of electronic commerce (hereinafter e-commerce) shows that businesses have realized that e-commerce allows them to enlarge their market to the international level through the internet. To reinforce business competitiveness and succeed in the global market, it is necessary to analyse the quality characteristics of e-commerce.
The universally accepted ISO 9000 standard presents quality as a relative concept, relative to a set of clear requirements. Thus, the quality of e-commerce depends on a set of its characteristics and requirements and shows how well the features comply with the defined requirements. However, in the e-commerce context there is no broadly accepted set of requirements; there are no guidelines or rules on how to assess e-commerce quality, choose an appropriate system, or improve it, whereas those who design the e-commerce model and implement IT solutions should receive full information concerning the properties of the implemented solution, its potential impact on the business perspectives, and the related risks. Thus, before an e-commerce system is implemented by a company, it must be thoroughly analysed: it is important to define […] and the possibility to compare goods and prices. As to the offline area, he mentioned the importance of conformance to the delivery conditions and the possibility to contact a consultant. Liu and Arnett (2000) named key website quality dimensions concentrated mainly on the online environment of e-commerce: the quality of information and services, the functionality of the system, and the quality of design. Zeithaml et al. (2000), in addition to reliability, usability, system functionality, flexibility, safety, price communication, and aesthetics, proposed to evaluate the reactivity of the website, accessibility, warranty (which influences trust), and personalisation. Szymanski and Hise (2000) dedicated more attention to the safety of transactions, the customer's comfort, information (quality and quantity), the offer, and the aesthetic details of design. Sang and Young (2001) defined as important criteria the pleasure and comfort of use, the reliability of the system, the speed of its actions, and the quality of information, i.e. aspects that are evidently important for the website but insufficient for evaluating the quality of the whole e-commerce system. Rolland (2003) mentions the findings of Yoo and Donthu (2001), who proposed to consider, for the evaluation of e-commerce quality, the aesthetics of the website, usability, price competitiveness, brand image and the uniqueness of the offered products, warranty, the safety of transactions, and the reaction to customers' queries. Meanwhile, Galan and Sabadie (2001) write more about the localization of the offer proposed to the consumer through the e-commerce system (suitability of the offer to the target market), accessibility, technical issues, interactivity, warranty, personalization, and aesthetics.
Many authors identified the important aspects of e-commerce and websites based on customer surveys. For example, Vidgen and Barnes (2001) surveyed 1013 respondents and revealed the following e-commerce aspects of importance: quality of information, quality of interaction (which manifests through trust and empathy), usability, and design. Another poll of the same coverage (1013 respondents) was performed by Yoo and Donthu (2001); apart from usability and reaction, they identified such criteria as aesthetics, offer uniqueness, price competitiveness, certification (warranty) of product quality, the company's image, and the system's safety. Based on a users' opinion poll (1013 respondents), Janda et al. (2002) made a list of quality criteria, "IRSQ", which includes e-shop performance, accessibility, the system's safety and reliability, sensation, and the characteristics of the provided information. Other authors, e.g. Wolfinbarger and Gilly (2002), elaborated a ComQ list of criteria based on a 1013-respondent opinion poll; the list is limited to reliability, design, online support, safety, and confidentiality criteria. A larger poll (2071 respondents) was performed by Srinivasan et al. (2002); it allowed the authors to identify eight evaluation criteria: personalization, interaction, loyalty support, attention to the customer, strong community, usability, choice, and pleasant navigation. The aspects linked with customer satisfaction were also noticed by Reibstein (2002), who proposed to evaluate the user-friendliness of the ordering system, usability, assortment, information about the offered products, data safety, price, delivery terms, and additional services. Bressolles (2002) orients e-commerce quality evaluation to the quality of the offer, the ergonomics and design of the e-shop, interactivity, reliability, and warranty. Aladwani and Palvia (2002) proposed an integrated four-dimension view of website quality (technical quality, general content quality, specific content quality, and appearance quality). As a result of the research carried out by Aladwani (2006), technical quality was found to be the only dimension of website quality to influence purchasing. Thus, Aladwani (2006) supposed that the customer's decision to buy depends on the perception of the technical quality of the e-commerce system.
Wolfinbarger and Gilly (2003) developed the eTailQ model, in which the analysis of website performance is based on four dimensions: the design of the interface, customer service, reliability (conformity of delivery with the announced terms), and safety. A longer list of evaluation criteria was proposed by Rolland and Wallet-Wodka (2003): they identified such quality dimensions as visibility (the e-shop is easy to find), usability, aesthetics, offer quality, interactivity, personalization, safety, information important for choice and stimulation of purchase, reliability, authority (image), and client support. The aspect of visibility identified by Rolland and Wallet-Wodka (2003) appears a bit earlier in the model for website quality evaluation called VPTCS (Visibility, Perception, Technique, Content, and Services) proposed by Sloïm (2001). The quality of visibility is defined as the ability of the website to be found by a potential user. The quality of perception, as the ability of the website to be used and properly perceived by the user, covers ergonomics, design, and usability. The quality of technique is understood as the ability of the website to behave in the way foreseen by developers and expected by users. The quality of content covers the adequacy and clearness of the provided information, content localization, legibility, relevance to reality, and etiquette. The quality of services is the ability to provide useful services: e-logistics, organization of delivery in good time, technical support, etc. Another system of e-solution evaluation, Netqual, assesses quality based on five dimensions (Bressolles, 2004): usability, reliability, design, safety and confidentiality, and information; i.e., the evaluation is performed by criteria oriented directly to the comfort and safety of the user.
With the development of e-commerce and other e-services, the Servqual method, largely discussed in the literature, was adapted by Parasuraman et al. (2005) for the evaluation of e-service quality; the resulting model covers four main dimensions: efficiency, system availability, fulfilment, and privacy. Such a model was created specifically for assessing the quality of service delivered by e-shops.
The review of the literature shows similar online aspects of e-commerce raised by different authors: design (Liao et al., 2006; Jin and Park, 2006), quality of the information provided on the website (Liao et al., 2006; Lin, 2007), safety (Shin and Fang, 2006; Jin and Park, 2006), and the technical characteristics, usability, and reactivity of the system (Liao et al., 2006; Shin and Fang, 2006). In addition, Jin and Park (2006) introduce the aspects of order fulfilment, communication, and advertising. Nowadays, the number of requirements for e-commerce is increasing, and the requirements extend from the safety context to the context of comfort and practicability. For example, Isaac and Volle (2008) defined four e-commerce analysis axes: communication, management, offer, and profitability. As the criteria, they proposed the quality of the offer (assortment and price) and of the provided information, the functionality of the website, interactivity, reaction speed, content updating, safety of transactions and personal data, and visibility (e.g., the e-shop's position in search engines). Apart from the named criteria, Isaac and Volle (2008) particularly emphasized usability, the modes and price of delivery, and additional services. The findings of Liang and Chen (2009) are based on a survey of 656 respondents, which identified aspects of e-commerce mostly oriented to the user's comfort: relevance of information, proposed assortment, usability, user-friendly navigation, reliability, and client relations.
It is fair to mention that not only scientists but also commercial structures create models of e-shop quality evaluation, e.g., Bizrate.com, Gomez.com, CIO.com, Temesis.com, etc., but none of them is universally used.
A review of the quite vast body of studies found in the scientific literature shows that there is no common and universally accepted way to evaluate e-commerce quality; moreover, most of these studies give no indication of how to improve the e-commerce system after quality assessment (Guseva, 2010). The analytical basis for identifying relevant criteria of e-commerce quality evaluation was inspired by the publications' network principle, i.e., by searching for the pertinent papers referred to in the articles concerning the quality of e-solutions. It was examined through the selection of the aspects mentioned by authors as quality criteria. The interactive coding method was invoked, whereby both the implicit and explicit appearance of e-solution quality criteria in the texts was noted; thus, a high level of generalization was achieved. Considering the performed review of the literature and employing the method of conceptual content analysis, the criteria mentioned in the literature more than once were selected and ranked by frequency of mention (Table 1).
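To make the counting-and-ranking step concrete, the following minimal Python sketch ranks criteria by mention frequency across a set of coded sources; the criterion names and coded sources are hypothetical placeholders, not the actual coding data behind Table 1.

```python
from collections import Counter

# Hypothetical coding output: each inner list holds the quality criteria
# explicitly or implicitly mentioned in one reviewed publication.
coded_sources = [
    ["information", "usability", "safety"],
    ["usability", "design", "safety", "reliability"],
    ["information", "reliability", "usability"],
    ["safety", "price", "delivery"],
]

# Count mentions across all sources and rank by frequency, mirroring the
# conceptual content analysis that produced Table 1.
mention_counts = Counter(c for source in coded_sources for c in source)
for criterion, count in mention_counts.most_common():
    print(f"{criterion}: mentioned {count} time(s)")
```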
Such a list allowed us to identify the e-commerce quality criteria most frequently mentioned in the literature. Software quality and the online elements of the e-commerce system dominate in web solution quality models. The concept of "empathy" used in Table 1 must be clarified: according to Suslavičius (2006), human emotions are associated with thinking, beliefs, and desires, i.e., people with empathy can usually describe more accurately the direction of human thoughts and prejudices. Thus, in the context of e-commerce, we may conclude that empathy can be seen as the ability of the system's developers to understand and realize the aspirations and wishes of the client, so that clients feel understood and satisfied while using the system.
After the conceptual content analysis, the criteria were grouped by topics, and the topics (not the separate criteria) were ranked by frequency.
The e-commerce quality elements covered in Table 1 are the following:
Information - the originality, relevancy to the context, reliability, and clearness of the provided information, and the ability of its content to meet customer expectations.
Usability 1 - flexibility, ergonomics, intuitiveness of navigation.
Safety of transactions and personal data - the ability to ensure data safety, which is especially important in online payment.
Reliability - the ability to observe promises and engagements.
Interactivity and communication - the characteristics of online communication, the intensity of interaction, and its results.
Access - the visibility of the e-shop on the internet, the accessibility of the provided information, and the workability of the used format.
Efficiency - the ability of the e-commerce system to meet the customer's expectations and to reach the goal.
Offer - the part of the content which is particularly important for an e-shop; it must stimulate purchases and give the customer all the accurate information needed to induce a purchase.
Warranty, goods return conditions - characteristics of post-purchase services.
Loading time (speed) - a technical characteristic of an e-shop, manifested through the display speed and page loading time.
Trust - the predisposition of the user to trust the information provided by a particular e-shop; this predisposition is often affected by the e-seller's image in the user's mind and impacts the predisposition to buy.
Services - characteristics of the concomitant services.
Price - reasonable pricing, a clear summary of all price components.
Delivery - the quality of delivery regarding time, mode, and form.
Aesthetics - a subjective characteristic, the general impression of the e-shop; this element is affected by the visual e-shop component and by the style of goods' presentation.
Assortment - the quality and breadth of the range of goods offered by an e-shop, the strategy of the offer.
Online support - the possibility to get help online, the efficiency and quickness of resolving problems.
Customizing (personalization) - the technical characteristic which allows identifying a particular user and customizing the system according to his preferences and needs.
Image - the impression made by the e-seller and the e-shop; a subjective characteristic.
Empathy (3) and Pleasure (1) - the capability of the e-seller to foresee and meet the needs of the customer in such a way that the customer feels understood and satisfied while using the e-shop; the generation of positive emotions.
Utility (serviceableness) - the ability of the e-commerce system to help users reach their goals: for the customer, to buy; for the seller, to sell.
Reactivity - the quickness of response to the customer's requests and orders; feedback.
1 Usability here is understood as the extent to which an e-solution, e-system, or information is ready to be used.
This grouping allows estimating the importance of each topic, defined by a "resumptive characteristic", and eliminating the risk of criteria overlapping.
The list of criteria grouped by topics shows which topics' criteria are most often considered by researchers and developers and which lack attention. The grouping of e-commerce quality criteria shows that the most often mentioned criteria are linked with the online content, technical characteristics, and particularities of e-commerce system usage. Client relation management (in both online and offline environments) and factors of loyalty creation are in the middle of the ranking. The aspects of user perception are mentioned less often, and the real-environment components of e-commerce are least invoked for e-commerce quality evaluation in the reviewed literature.
It is important to emphasize the special position of safety: this criterion, even when not grouped with the others, is placed on the 3rd line of the list of solitary criteria. This shows the particular importance of safety, especially in e-commerce with online payment (Guseva, 2009).
Considering that the analytical basis covers sources from a long period, the currently most relevant e-commerce quality criteria should be reviewed; thus, an experts' survey was performed to refine the actual quality criteria.
E-commerce quality criteria indicated by experts
The refinement of the e-commerce quality criteria was performed with the help of a reconnaissance survey of experts' opinion, conducted from November 2008 to January 2009. Aiming to ensure the equality of the scientific and practical approaches, an equal number of experts from the scientific and practical fields was chosen. Opinions on e-commerce quality criteria were expressed by 8 experts (from France (5), Lithuania (2), and Russia (1)), who were selected for the survey considering their input in scientific publications and their good practice in developing e-commerce. The results of the experts' survey are presented in Fig. 1, in which individual experts are visualized as circles, and the arrows show the e-commerce elements chosen by each expert as critical for e-commerce.
The mentioned e-commerce elements are grouped according to their appearance in the purchase process, from sales to post-purchase services. At the first customer-business contact point, the experts highlighted the importance of e-shop quality and the quality of the e-offer; the number of e-purchases can also be treated as an indication of e-commerce quality. At the payment stage, payment security was evaluated as most important (a valid security certificate was indicated as a possible security proof), and the variety of payment alternatives was treated as a significant issue for the positive perception of payment organization quality. Four of the eight experts indicated the clearness of the operational payment system as a relevant criterion of e-payment evaluation. At the delivery stage, the delivery term and mode were presented equally in the experts' considerations, and the provision of client support during delivery was also highlighted by the invoked experts. For post-purchase service quality assessment, the term of the provided warranty and the location of service points were indicated as criteria. Some of the experts mentioned the importance of client support at the post-purchase stage. Thus, the experts consider client support indispensable at the offline e-commerce stages (delivery and post-purchase), which take place after payment.
At the first stage of e-commerce, content quality (e-offer quality) prevails; the experts' opinion on the importance of payment security is similar to the e-payment priorities found in the literature. The offline e-commerce stages are insufficiently considered in the literature, but they attracted the experts' attention. Thus, to enrich the knowledge about the relevant criteria of e-commerce quality evaluation from the e-customers' point of view, an e-customers' survey was performed.
Customers' approach to the e-commerce systems' quality characteristics
A survey of e-customers' opinion was conducted in 2009. It focused on e-customers potentially interested in purchasing from Lithuanian e-shops. Since the geographical location of Lithuania is favourable, in the logistics context, for transactions with both the EU and the CIS countries, answers from e-customers in the EU and CIS countries were accepted alongside those from Lithuania in the random sample of 204 completed questionnaires. The response rate reached 32.7% of the persons who had received the questionnaire. One third (33%) of the sampled questionnaires were filled in by e-customers from Lithuania, 36% from CIS countries (mostly from Russia), and the rest from other European countries. The purchase frequency in the sample can be divided into three groups: about 23% of respondents purchased online every month, nearly half (49%) made online purchases every 3-6 months, and the others less than once per six months. Thus, the results of this survey were influenced mostly by the opinion of moderately active buyers and partly by the opinion of respondents with a greater or lesser than average purchase frequency. Considering that quite a large variety of goods can be sold online, the respondents were asked to indicate which goods they purchased online; these were mostly books, household goods, and electronics, which are successfully marketed via traditional sales channels, while a quarter of the respondents purchased online games (4%), software (5%), and media content (music, video, audio books, etc.; 16%), i.e., production that can be supplied online via the internet and does not demand physical delivery. Thus, there is reason to believe that, with the e-shopping structure revealed by this survey, not only general e-shopping preferences but also specific aspects related to delivery and post-purchase service quality will be revealed.
Respondents were asked to assess the importance of the proposed e-commerce elements on a 9-point scale. The mean values of the evaluations given by customers are provided in Fig. 2.
FIG. 2. Mean values of e-commerce elements' importance to e-customers (compiled by author)

In the online ordering step, we can identify three main units of e-sales organization important for customers: the technical-logical implementation of the e-shop (via navigation and technical characteristics), the e-shop's special-purpose adaptation (via offer and localization), and the e-shop's social role (via communication possibilities). The character of the data in this part of the analysis shows that customers are most undecided about the importance of online support in e-shopping. An analysis based on the customers' purchasing frequency shows differences in the perception of online support importance: if we divide the judgements by customers' e-shopping experience (very experienced customers purchase online every month or more often, middle-experienced customers purchase online once in 3-6 months, and the least experienced customers purchase online less than every 6 months), we see that the less experienced in e-shopping the customers are, the more important online support is for them when choosing and ordering goods online (the mean values of online support importance for each group are 5.1, 5.5, and 5.9, respectively). Thus, there is a need to organize the e-shop structure and e-offer content in a comfortable, readable, and easily understandable way, as well as to ensure the possibility of a dialog within the e-shop; this is important for stimulating interest and purchase.
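The group comparison described above can be reproduced with a few lines of Python; the individual ratings below are hypothetical, chosen only so that the group means fall near the reported values of 5.1, 5.5, and 5.9.

```python
from statistics import mean

# Hypothetical 9-point importance ratings for online support, grouped by
# e-shopping experience as in the survey (monthly / every 3-6 months /
# less than every 6 months). The ratings are illustrative only.
ratings_by_experience = {
    "very experienced": [5, 4, 6, 5, 6],
    "middle experienced": [6, 5, 5, 6, 5],
    "least experienced": [6, 6, 5, 7, 6],
}

for group, ratings in ratings_by_experience.items():
    print(f"{group}: mean importance = {mean(ratings):.1f}")
```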
According to the survey results, we can identify two main poles of payment organization: payment safety (via ensuring data safety and user trust) and comfort (via providing alternatives, clearness, and support). The respondents indicated additional payment features important to them: interactive notification of payment status, confirmation of payment receipt, a flexible discount system, and a clear procedure for refunding prepayments in case of order cancellation. The customer's wish to pay in the most secure way possible dominates: e-customers prefer to pay on delivery, when they are sure that the e-seller fulfils the engagements. Moreover, the fees associated with the payment method are of fundamental importance when choosing among payment alternatives. The survey results show that client support during payment is more important for mistrustful or inexperienced e-buyers (the importance of support is 5.91 out of 9 points for inexperienced e-customers vs. 4.83 for experienced ones). However, the clearness of payment was evaluated as more important than client support during payment; in such a situation, it would be sufficient to ensure the clearness of the payment system in order to lower the need for online support.
Analysis of the scientific literature has shown that the offline elements of e-commerce lack attention and are poorly explored (Guseva, 2009). Thus, the next part of the survey, which covered the offline processes of e-commerce, was of particular interest. The main questions that must be answered before delivery concern the practical aspects of delivery: "when?", "how?", "how much?", and "where to get support in case of trouble?" If the e-customer finds answers to these questions, it is likely that he or she will place the order. If the actual delivery corresponds to the answers found before ordering, it is likely that the e-customer will return to this e-shop with a new order. The similarity of the importance of all delivery aspects indicates that not only the separate characteristics of delivery but their combination is important to the respondents. A complex assessment of delivery through relative values, such as the ratio of the delivery term to the price, could be more useful than the analysis of separate delivery indicators.
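As an illustration of such a composite, relative-value assessment, the sketch below combines the delivery term, price, and number of available modes into a single index; the formula and its constants are assumptions made for illustration, not a measure proposed in the survey.

```python
# Illustrative composite delivery assessment, following the idea of using
# relative values rather than separate indicators. The weighting and the
# normalisation are hypothetical.
def delivery_index(term_days: float, price_eur: float, modes: int) -> float:
    """A lower term and price and more delivery modes give a higher index."""
    term_price_ratio = term_days * price_eur   # penalise slow and costly delivery
    return modes / (1.0 + term_price_ratio)

# Example: 3-day delivery for 5 EUR with 2 available modes
# vs. 7-day delivery for 3 EUR with a single mode.
print(delivery_index(3, 5, 2))   # 0.125
print(delivery_index(7, 3, 1))   # ~0.045
```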
As concerns post-purchase service, customers evaluated the importance of its main aspects quite similarly (Fig. 2). Customers want to have the possibility, when needed, to use the rights provided by the warranty; they need quite a high level of indemnity in the case of an online purchase. The main accent was put on the possibilities provided to the e-consumer (the possibility to get a refund, to return the product, to get assistance, etc.), rather than on the extent of the warranty; for example, 76% of respondents indicated that they were satisfied with the same warranty term as provided by conventional shops, 7% were satisfied with the same warranty but would like a longer one, and 17% expected a longer warranty term in the case of e-purchase. In a special field for respondents' comments, some indicated that it was important for them to get a consultation on the purchased product's use, or help in resolving other related issues, at the seller's physical office or via the online support service. Analysis of this section shows the customers' need to be in contact with the seller after purchase, to ask for help or to lodge a claim in case of possible problems. It is important to mention here that long-term post-purchase service can be less relevant in the market of perishable goods (flowers, meals, etc.); the longer the lifecycle of a product after purchase, the more important post-purchase services and a long-term warranty become.
Summary of identified e-commerce quality criteria
Upon considering three different sources of information, the identified e-commerce quality criteria were summarized in Table 2.
It can be considered that the aspects of e-sales organization quality mentioned in the literature, by the experts, and by the customers are quite similar; this fact facilitates the selection of the most relevant criteria. At the first, e-ordering, stage, the following quality criteria were identified: usability (navigation), the e-shop's technical quality, the content of the e-offer and its localization, and the possibility of interaction through the e-shop; at the starting point of e-shopping, the e-customer expects a customized environment with simulations of the live communication typical of shopping in a physical store. The payment safety criterion is mentioned by all of the considered sources; however, the experts and the customers also indicated payment comfort aspects: the requirements for payment safety can be complemented by requirements for payment clearness and alternatives. Thus, the customers extended the requirements from those linked to payment safety to those linked to payment comfort.
The information on possible criteria for evaluating the offline part of e-commerce provided in the considered literature is quite poor and imprecise; the experts' and customers' surveys enriched the list of relevant criteria for evaluating delivery and post-purchase service quality. The quality of delivery organization can be evaluated by the delivery term, price, variety of modes, and client support. These elements reflect the practical expectations of e-customers concerning delivery, the moment when the customer meets the seller's representative in real life. As concerns post-purchase service, warranty figured in all contributions; other guarantees are required as well and can be considered quality indications: the conditions of refund and product return, and client support after purchase, which determines the long-term client-business relationship and client loyalty. This shows the e-customer's need for a continuous business relationship with the e-seller and for the whole complex of post-purchase services, similar to those available in conventional shops.
Conclusions
Clear and unified criteria of e-commerce quality simplify the choice of a trustworthy partner on the internet, which is often difficult due to misleading virtual images. Moreover, benchmarking, in which several partners' or competitors' systems are compared online, is more efficient with reasoned evaluation criteria. With a view to providing a core for the successful development of e-commerce, its quality criteria were defined based on a triple approach: literature analysis, experts' opinions, and e-customers' preferences and requirements.
The literature analysis has shown that a vast range of criteria is proposed for e-solution quality evaluation, but no single model prevails. However, due to the noted similarities among the models, the main groups of criteria to be applied for e-commerce quality evaluation were identified. The analysis has shown that the most often mentioned criteria are linked with the online content, technical characteristics, and particularities of e-commerce system usage, while the aspects of client relation management and user perception are mentioned less often, and the offline aspects of e-commerce are least invoked in the reviewed literature. Considering the large time interval covered by the analysed literature, the set of relevant e-commerce aspects required refinement; therefore, the experts' and e-consumers' surveys enriched the set of relevant criteria, grouped by the four e-commerce stages. Thus, within the website content aspect mentioned in the literature, the experts accentuate the importance of the e-offer's characteristics. All sources mention the system's usability and technical characteristics. The customers' survey raised the question of the social role of the e-shop, coming into play through client support, customer reviews, and social networks. Moreover, the experts' and customers' surveys show that not only safety but also the aspects of payment comfort (variety of alternatives, clearness), which are not mentioned in the analysed literature, must be considered in e-commerce quality evaluation. The offline e-commerce processes are often ignored in the literature; however, our survey has demonstrated that the offline characteristics of e-commerce are of no less importance than the online ones. On the contrary, e-customers have a set of quite clear practical requirements that are important to them and that can serve as quality criteria for the offline e-commerce stages, such as concrete conditions of delivery and post-purchase support.
It can be summarized that, with the evolution of e-commerce, the requirements for it evolve as well, from safety to comfort and mutual profitability. The need for permanent client support is remarkable and increases after payment: high-quality client relation support, which provides the business with the customer's trust and loyalty, is one of the essential parts of successful e-commerce development.
TABLE 1. The ranked list of e-commerce quality criteria (compiled by author). 1 The resumptive characteristics were ranked by mentioning frequency.
TABLE 2. Summary of identified e-commerce quality criteria (compiled by author)
"Business",
"Computer Science"
] |
Association of Serum Brain-Derived Tau With Clinical Outcome and Longitudinal Change in Patients With Severe Traumatic Brain Injury
Key Points
Question: Are levels of serum brain-derived tau (BD-tau) at admission associated with clinical outcome and long-term change in patients with severe traumatic brain injury (sTBI)?
Findings: In this cohort study of 39 patients with sTBI, the mean fold difference in serum BD-tau concentrations on day 0 for patients with sTBI with unfavorable clinical outcomes vs those with favorable clinical outcomes 1 year after the injury was higher than the mean fold differences in serum total tau and phosphorylated tau231. Serum BD-tau demonstrated slower clearance from the blood (56.6% of baseline levels remaining by day 7) compared with total tau and phosphorylated tau231, which had only 19.0% and 7.5% of baseline levels, respectively, remaining at day 7.
Meaning: This study suggests that concentrations and longitudinal trajectories of serum BD-tau differ among patients with sTBI depending on clinical outcome; serum BD-tau could be used as an accessible biomarker to monitor clinical outcome in patients with sTBI at admission and 7 days after the injury.
Introduction
Traumatic brain injury (TBI) is one of the leading causes of morbidity, disability, and mortality across all ages.1,2 Around the world, more than 50 million individuals are affected by TBI every year.2 Posttraumatic complications of TBI can range from minor neurological and psychosocial problems to long-term disability,3 making it crucial to follow up with patients after injury to ascertain longitudinal outcomes.
Traumatic brain injury is often classified as mild or severe according to the intensity of the injury.4 Severe TBI (sTBI) can be more life threatening and has lower rates of survival.2 In clinical settings, sTBI is commonly classified using the Glasgow Coma Scale (GCS) at admission to the hospital, while the Glasgow Outcome Scale (GOS) is used to assess long-term clinical outcome.5 Moreover, structural damage after sTBI may be detected by neuroimaging techniques.6 Despite the proven effectiveness of these approaches, they are limited in providing the biochemical brain-related changes reflected in the bloodstream within a few hours after trauma. Circulating blood biomarkers provide biochemical information and prognostic insights into clinical severity to guide patient management and monitor long-term outcome.4 Serum total tau (T-tau) is one of the most well-characterized biomarkers for sTBI,4,7,8 showing large increases within hours of the injury.7 However, studies have suggested that current assays for T-tau quantify both central nervous system (CNS) and peripheral tau when measurements are performed on blood (serum or plasma) samples.9,10 Therefore, we hypothesized that a blood-based biomarker that is selective for CNS tau will be more accurate at reflecting the brain-associated tau released into the bloodstream while avoiding potential influences from peripheral tau. To this end, we evaluated the association of the novel brain-derived tau (BD-tau) marker11 with baseline clinical severity and longitudinal outcome compared with T-tau in serum samples from participants with sTBI followed up clinically over a 1-year period. We also examined changes in serum phosphorylated tau231 (p-tau231) and neurofilament light chain (NfL) concentrations as other neuronal injury-related markers.
Study Cohort, Design, and Outcome
This study included 42 participants (39 with data on all 4 serum biomarkers) from the prospective Swedish TBI Neurointensivvårdsavdelning cohort of patients with sTBI who were receiving clinical care at the Sahlgrenska University Hospital, Gothenburg, Sweden, and were followed up for 1 year.12,13 Participant recruitment, clinical assessments, and blood sample collection were performed between September 1, 2006, and July 1, 2015. The inclusion criteria were (1) TBI with a GCS score of 8 or less on admission, (2) admission to the neurointensive care unit within 48 hours of head injury, (3) age 18 years or older, (4) acceptance from next of kin to participate in the study, and (5) residence in Sweden for the 12 months of follow-up. The exclusion criteria included no provision of informed consent, a known history of neurological and/or autoimmune disease, and pregnancy. The ethics committee at the University of Gothenburg approved the study. Written informed consent was obtained from the participants' next of kin.

Traumatic brain injury outcome was clinically assessed with the GOS at 12 months5; those with a GOS score of 1 to 3 were classified as having an unfavorable outcome, and those with a GOS score of 4 to 5 were classified as having a favorable outcome. The 12-month outcome assessments were collected using a mixed-methods approach, including interviews performed either in person or via telephone. For participants with substantial impairment, their proxies were interviewed.
There were 39 participants on day 0, 39 on day 7, and 15 on day 365. Loss to follow-up was mainly due to death or disability, particularly in the group with unfavorable outcomes.
Blood Sample Handling and Biomarker Measurements
Serum samples were obtained at the indicated time points according to standard procedures and stored at −80 °C until use. Serum BD-tau and p-tau231 were measured on the Simoa HD-X platform (Quanterix) using validated in-house assays,11,14 and T-tau and NfL with Quanterix assays (Nos. 101552 and 103670, respectively).
Statistical Analysis
Biomarker measurements and statistical analyses were performed between October and November 2021 at the University of Gothenburg with Prism, version 9.3.1 (GraphPad). The distributions of the data sets were examined for normality using the Kolmogorov-Smirnov test. Because the data were nonnormally distributed, nonparametric tests were used, and continuous data are presented as median (IQR) values. To compare serum biomarker levels between 2 groups (ie, the unfavorable and favorable outcome groups at each time point), the mean fold differences (95% CIs) were calculated and statistical comparisons examined using the Mann-Whitney test. For examining biomarker levels at all 3 time points (days 0, 7, and 365) within the whole cohort or the specific outcome groups, the Kruskal-Wallis test with the Dunn multiple comparison test was used. P values (including those adjusted for multiple comparisons) were considered significant at the 2-sided P < .05 level.
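The following Python sketch reproduces the flavor of this analysis pipeline with SciPy; the concentration arrays are hypothetical stand-ins for the cohort data, and Dunn's post hoc step is only indicated, since it is not part of SciPy itself.

```python
import numpy as np
from scipy import stats

# Hypothetical serum biomarker concentrations (pg/mL) for the two outcome
# groups at one time point; the study applied the same nonparametric tests
# to nonnormally distributed data.
unfavorable = np.array([191.4, 150.2, 88.7, 310.5, 45.9])
favorable = np.array([44.0, 30.1, 55.6, 21.8, 38.3])

# Normality check: Kolmogorov-Smirnov against a fitted normal distribution.
print(stats.kstest(unfavorable, "norm",
                   args=(unfavorable.mean(), unfavorable.std(ddof=1))))

# Two-group comparison: Mann-Whitney U test plus the mean fold difference.
print(stats.mannwhitneyu(unfavorable, favorable))
print("mean fold difference:", unfavorable.mean() / favorable.mean())

# Three time points within one group: Kruskal-Wallis test (Dunn's
# multiple-comparison step would follow, e.g. via the scikit-posthocs package).
day0, day7, day365 = unfavorable, unfavorable * 0.6, unfavorable * 0.1
print(stats.kruskal(day0, day7, day365))
```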
Cohort Characteristics
The study included 42 participants with at least 1 biomarker measured at baseline.
Serum BD-Tau Levels in sTBI Clinical Outcome Groups on Admission and 7 Days Later
Initial levels of BD-tau (on days 0 and 7) were associated with GOS outcome at 1 year. On day 0, serum BD-tau levels were higher in the unfavorable outcome group (mean [SD], 191.4 [190.8] pg/mL) than in the favorable outcome group; by day 7, levels had decreased in the favorable outcome group (from 44.0 to 13.0 pg/mL) (Table). However, despite the decrease in concentrations, the mean differences between the outcome groups were similar on days 0 and 7. By day 365, serum BD-tau levels in both groups had further decreased to concentrations much lower than the corresponding day 0 and day 7 levels (Figure, A).
Serum T-tau and p-tau231 levels also decreased between days 0 and 7 (Figure, B and C). Another distinction from BD-tau was that the mean differences between the groups on days 0 and 7 tended to vary (Table). Because of the decrease in T-tau and p-tau231 concentrations from day 0 to day 7, the between-group mean differences on day 7 vs day 365 were similar.
Discussion
In the present study, we found that serum BD-tau level could have utility for evaluating clinical outcome in sTBI, both on the day of the event and 7 days later. These results, which were not observed for serum T-tau level, suggest that the selective measurement of tau of CNS origin in the bloodstream has the capacity to improve the accuracy of the clinical outcome assessment and management of sTBI. In agreement with recent findings indicating that current blood-based T-tau assays quantify tau of both CNS and peripheral origin and that the latter makes up approximately 80% of the T-tau signal in the bloodstream,15 our results suggest that CNS tau differences between groups of patients with sTBI and different clinical outcomes can be masked if a nonselective blood-based tau assay (ie, T-tau) is used.
In addition, the inability of p-tau231 and NfL to differentiate between the clinical outcome groups suggests their limited value for the clinical evaluation of sTBI, despite their well-validated roles in Alzheimer disease pathophysiology and general neurodegeneration, respectively.14,16 The results indicate that all 3 tau biomarkers (BD-tau, T-tau, and p-tau231) are released from the brain into the bloodstream within minutes to hours of sTBI, possibly due to the opening of the blood-brain barrier. This initial increased release of both total (unphosphorylated) and phosphorylated forms of tau agrees with previous reports showing that brain trauma leads to the rapid release of tau of various molecular forms into extracellular fluids.7,8 The consistent longitudinal reduction in these biomarker levels was due to a lack of replenishment of the initial (day 0) signals during physiologically regulated tau turnover.7 Serum T-tau was cleared much more rapidly (81% removed by day 7) than BD-tau, which could be explained by the ratio of CNS tau to peripheral tau in the bloodstream, which was in favor of CNS tau on day 0 (due to the increased release of brain tau), returning to pre-sTBI levels over time. However, BD-tau, which exclusively quantifies brain-originating tau,11 showed that CNS tau is not cleared so quickly and that substantial amounts remain for up to 1 year. This slow clearance of BD-tau proved useful for the clinical monitoring of outcome and recovery after sTBI. For example, while considering T-tau alone might suggest recovery by day 7 (due to its significantly decreased levels compared with day 0), BD-tau indicates otherwise, because its levels were still statistically indistinguishable from day 0 regardless of clinical outcome. Continuous evaluation of BD-tau levels between days 7 and 365 would be informative to ascertain the point at which the decrease becomes significantly lower compared with days 0 and 7, and whether patients with favorable outcomes reach this point earlier than those with unfavorable outcomes. We also anticipate that individuals with mild TBI will show faster decreases in BD-tau compared with those with sTBI. Finally, NfL had a different trajectory, similar to previous reports,12 suggesting slower release into the bloodstream compared with the tau markers. However, the peaking of the signal at day 7 and its difference from day 365 suggest its value for outcome monitoring after sTBI.
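The clearance figures quoted here follow from simple percent-of-baseline arithmetic, sketched below with the group-level concentrations reported in the text (the NfL values appear in the trajectory paragraph later in the article).

```python
# Percentage of the baseline (day 0) biomarker signal remaining at a later
# time point, as used to describe clearance.
def percent_remaining(baseline: float, later: float) -> float:
    return 100.0 * later / baseline

# Serum NfL rose rather than cleared: 86.8 -> 308.9 pg/mL from day 0 to
# day 7, an increase of roughly 256% from these rounded inputs (reported
# as 255.5% in the article).
nfl_change = percent_remaining(86.8, 308.9) - 100.0
print(f"NfL change, day 0 to day 7: +{nfl_change:.1f}%")

# For the tau markers, the paper reports 56.6% (BD-tau), 19.0% (T-tau) and
# 7.5% (p-tau231) of baseline remaining at day 7; with those fractions,
# T-tau corresponds to about 81% removed by day 7.
print(f"T-tau removed by day 7: {100 - 19.0:.0f}%")
```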
Strengths and Limitations
This study has some strengths, including the longitudinal design and the head-to-head comparison of multiple biomarkers. It also has some limitations, including the lack of sampling time points between days 7 and 365, as well as the restriction of the study to patients with sTBI, without including those with mild TBI. Moreover, many participants were lost to follow-up due to death and disability.
Despite the important and novel information provided in this study, the results should be interpreted with caution given these limitations.
Figure. Longitudinal trajectories of serum biomarker levels following severe TBI, in the whole cohort and according to clinical outcome. The plots show the median serum concentrations of brain-derived tau (BD-tau) (A), total tau (T-tau) (B), phosphorylated tau231 (p-tau231) (C), and neurofilament light chain (NfL) (D). In each plot, the serum biomarker values at the different time points (days 0, 7, and 365) are shown for the whole cohort (left) as well as for the 2 clinical outcome groups.
The longitudinal trajectory of serum NfL differed from those of the tau-based biomarkers. Instead of decreasing from day 0 to day 7, serum NfL increased by 255.5% (from 86.8 to 308.9 pg/mL). There were increases of 156.0% (from 91.1 to 233.2 pg/mL) and 343.5% (from 85.8 to 380.5 pg/mL) from day 0 in the favorable and unfavorable outcome groups, respectively (Table and Figure). The highest levels were recorded on day 7 in the whole cohort and in both clinical outcome groups, with increased mean differences between days 0 and 7 (Table). The levels decreased by 97.0% (from 308.9 to 9.2 pg/mL) from day 7 to day 365 (Table and Figure, D).
"Biology",
"Medicine",
"Psychology"
] |
Fiber Loop Mirror Based on Optical Fiber Circulator for Sensing Applications
In this paper, a different Fiber Loop Mirror (FLM) configuration with two circulators is presented. This configuration is demonstrated and characterized for sensing applications. This new design concept was used for strain and torsion discrimination. For strain measurement, the interference fringe displacement has a sensitivity of (0.576 ± 0.009) pm‧με−1. When the FFT (Fast Fourier Transform) is calculated and the frequency shift and signal amplitude are monitored, the sensitivities are (−2.1 ± 0.3) × 10−4 nm−1 με−1 and (4.9 ± 0.3) × 10−7 με−1, respectively. For the characterization in torsion, an FFT peaks variation of (−2.177 ± 0.002) × 10−12 nm−1/° and an amplitude variation of (1.02 ± 0.06) × 10−3/° are achieved. This configuration allows the use of a wide range of fiber lengths and with different refractive indices for controlling the free spectral range (FSR) and achieving refractive index differences, i.e., birefringence, higher than 10−2, which is essential for the development of high sensitivity physical parameter sensors, such as operating on the Vernier effect. Furthermore, this FLM configuration allows the system to be balanced, which is not possible with traditional FLMs.
Introduction
The Fiber Loop Mirror (FLM) is one of the most flexible configurations in optical systems. With applications in lasers, as well as in sensors, they can be used as mirrors [1] or as optical filters [2]. As a mirror, usually implemented in mode-locked fiber lasers [3], two rings are formed by connecting the coupler's parallel ports [4] or cross ports [5], where at least one of them is composed of a nonlinear medium. These configurations are called Nonlinear Optical Loop Mirrors (NOLM). As a mirror device, this can also be constituted of just a ring without a nonlinear medium, where the reflectivity is provided by polarization control [6].
As an optical filter, the FLM is only composed of a ring containing a piece of high-birefringence (Hi-Bi) optical fiber [7]. Through a polarization controller, the clockwise (CW) and counterclockwise (CCW) propagation modes acquire different phases due to the birefringence of the fiber, allowing the generation of an interference pattern when they are overlapped in the coupler. When the Hi-Bi optical fiber is exposed to a temperature or strain variation, it implies a variation of the phase difference between the two beams and a spectrum variation is consequently obtained. This allows the application of FLMs as intensity or interferometric sensors for either single or simultaneous measurements, exhibiting a maximum strain sensitivity of the order of tens of pm/µε and a temperature sensitivity of a few nm/°C. In addition, they can also be used as optical gyroscopes [8].
With the emergence of the Vernier effect applied to optical sensors in 2011 [9], several optical fiber sensors were used, and the FLM was no exception [10]. This phenomenon consists of the overlapping of the interference fringes of two interferometers, whose optical path difference is close to each other or close to a multiple, originating two waves: the carrier and the envelope [11]. Usually, sensors work based on the traditional or enhanced Vernier effect [12], where the envelope variation is measured, as it is the one that shows the highest sensitivity. In 2020, with an FLM of two Hi-Bi fibers, a new record was achieved, a sensitivity of 10,000 pm/µε, where the "push-pull" concept was implemented in order to achieve the enhanced Vernier effect in an optimized way [13]. This required the implementation of two Hi-Bi fibers with a length longer than 1 m so that it was possible to achieve the colossal sensitivity with a 100 nm bandwidth erbium source. Despite the colossal sensitivities achieved, the need for interferometers with a low Free Spectral Range (FSR) implies the implementation of long fibers due to the low birefringence of current Hi-Bi fibers, where solid fibers have a birefringence of the order of 10−4 and PCF fibers have a birefringence of the order of 10−3.
This work consists of exploring a new FLM configuration with two circulators in series. This new configuration allows a typical ring interferometer to be balanced, which is not the case with traditional FLMs. The difference in arms is 20 mm in distinct output ports. In addition, this system is more versatile, since the FSR of the system also depends on the length difference of the two single-mode fibers, a situation that does not occur in traditional FLMs. Furthermore, FSR also depends on the difference in the refractive indices of the two optical fibers. The torsion and strain application will result in the change of distinct spectrum features, allowing the simultaneous measurement of the two physical quantities.
Theoretical Consideration
In the presented system, the FLM has two circulators and two single-mode optical fibers (Figure 1). The beam is first divided in two at the fiber coupler, CW and CCW. Because of the isolation of the optical circulators, the CCW beam only travels along the bottom optical fiber (forming the inner ring) and the CW beam travels along the top fiber (forming the outer ring). In addition, a polarization controller is also used to ensure maximum interference [14,15]. The electric field of the system's arms can be described as:

$E_1 = \frac{E_{in}}{\sqrt{2}}\, e^{i\beta_1 L_1}$ (1)

$E_2 = \frac{E_{in}}{\sqrt{2}}\, e^{i\beta_2 L_2}$ (2)

where $E_{in}$ is the initial electric field amplitude, $\beta$ is the propagation constant, $\beta = 2\pi n/\lambda$, $n$ is the refractive index, $\lambda$ is the wavelength, and $L$ is the fiber length. The intensity is given by:

$I = |E_1 + E_2|^2 = I_1 + I_2 + 2\sqrt{I_1 I_2}\,\cos(\beta_1 L_1 - \beta_2 L_2)$ (3)

As seen in Figure 2, the interference of the two waves results in a spectrum that can be described as a cosine. For this case, the spectral intensity is given by [16]:

$I(\lambda) \propto 1 + \cos\!\left(\frac{2\pi (n_1 L_1 - n_2 L_2)}{\lambda}\right)$ (4)

Thus, the system has a spectrum similar to that of an optical cavity. This system allows measuring the difference in torsion and strain between the arms, where the arm opposite to the action of the physical quantity is the reference. Thus, the strain variation is applied in the CCW optical path, which serves as the reference for the torsion measurement, and the torsion variation is applied in the CW optical path, which serves as the reference for the strain measurement. A variation in torsion implies a variation in interferometer visibility due to the polarization rotation of one light beam. A variation in strain changes the optical path difference, which results in a frequency variation associated with the interference pattern, typically studied by tracking the spectrum extremes.

The general equations to discriminate the two distinct parameters from the frequency shift of the FFT maxima ($\Delta f$) and the corresponding amplitude variation ($\Delta A$) are:

$\Delta f = K_{\varepsilon,f}\,\Delta\varepsilon + K_{\tau,f}\,\Delta\tau$ (5)

$\Delta A = K_{\varepsilon,A}\,\Delta\varepsilon + K_{\tau,A}\,\Delta\tau$ (6)

where $K_{i,j}$ with $i = \varepsilon,\tau$ and $j = f,A$ are the linear fit slopes that, for this system, are given in Section 3.2. So, the system equation is:

$\begin{pmatrix} \Delta\varepsilon \\ \Delta\tau \end{pmatrix} = \begin{pmatrix} K_{\varepsilon,f} & K_{\tau,f} \\ K_{\varepsilon,A} & K_{\tau,A} \end{pmatrix}^{-1} \begin{pmatrix} \Delta f \\ \Delta A \end{pmatrix}$ (7)
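As a numerical illustration of Equations (3) and (4), the following Python sketch evaluates the cosine-type spectrum for the arm imbalance used later in the experiment and locates the corresponding FFT peak; the effective index n = 1.468 is an assumed typical value for SMF28e, and the script is an illustration, not the authors' processing code.

```python
import numpy as np

# Two-beam interference spectrum of the FLM, Equation (4):
# I(lambda) ~ 1 + cos(2*pi*OPD/lambda), with OPD = n1*L1 - n2*L2.
n = 1.468                      # assumed effective index of SMF28e
delta_L = 20.2e-3              # arm length difference (m), from the text
opd = n * delta_L              # optical path difference

wl = np.linspace(1530e-9, 1600e-9, 8192)    # erbium-band wavelengths (m)
intensity = 0.5 * (1.0 + np.cos(2.0 * np.pi * opd / wl))

# Local FSR near 1565 nm: FSR = lambda^2 / OPD (roughly 83 pm here).
print(f"FSR near 1565 nm: {(1565e-9)**2 / opd * 1e12:.1f} pm")

# The FFT of the spectrum resampled uniformly in wavenumber gives a peak
# whose position tracks the OPD; shifts of this peak encode applied strain.
k = 2.0 * np.pi / wl                        # wavenumber grid (nonuniform)
k_uniform = np.linspace(k.min(), k.max(), k.size)
i_uniform = np.interp(k_uniform, k[::-1], intensity[::-1])
spectrum = np.abs(np.fft.rfft(i_uniform - i_uniform.mean()))
print("dominant FFT bin:", int(np.argmax(spectrum)))  # ~850 for these values
```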
Results
The system was characterized for strain and torsion variations. This section consists of three parts: a test overview, the characterization with ∆L = 20 mm and ∆n = 0, and the characterization with ∆L ≈ 0 and ∆n ≠ 0, where ∆L is the length difference between the two arms and ∆n is the refractive index difference between the two arms of the interferometer.
Test Overview
The experimental setup, based on Figure 1, has the ability to measure strain and torsion. This system was composed of a 3 dB optical coupler (2 × 2), two optical circulators, a polarization controller, standard optical fiber (SMF28e) and PS1250/1550 optical fiber, an erbium source with a 70 nm bandwidth centered at 1565 nm, and an optical spectrum analyzer (OSA) (model YOKOGAWA AQ5370C) with a resolution of 0.02 nm; connections were made with a conventional splice machine (model Sumitomo Electric Type-72C, Osaka, Japan).
All measurements were made at a room temperature of 20 °C. For the strain characterization, a conventional micrometric translation stage was used. The stage measurement step is 50 µm and the fiber length over which the strain was applied was 430 ± 1 mm, so the strain measurement step was 116 µε. The strain measurement range was between 0 mε and 1.67 mε. For the torsion characterization, a torsion stage was used. The measuring step was 5°, the torsion measurement range was between 0° and 180°, and the fiber length over which the torsion was applied was 50 ± 1 mm.
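The quoted 116 µε step follows directly from the stage step and the gauge length, as the small check below shows.

```python
# Conversion of the translation-stage step to a strain step:
# strain = elongation / gauge length.
stage_step = 50e-6        # m, micrometric stage step
fiber_length = 0.430      # m, fiber length under strain
strain_step = stage_step / fiber_length
print(f"strain step: {strain_step * 1e6:.0f} ue")   # ~116 micro-strain
```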
For ∆L = 20 mm and ∆n = 0
In the first step, the system was set up in such a way that the refractive index of the interferometer arms was equal and the difference in arm lengths between the two circulators was 20.2 ± 0.1 mm. Each physical quantity was measured based on the perturbation of only one of the system's arms, where strain was applied to the CCW state and torsion was applied to the CW state. Figure 3 shows the output spectrum of the system and its FFT.
In the characterization, both the frequency shift of the FFT maxima (∆f) and the variation of the corresponding amplitudes (∆A) were determined. In addition, the interference fringe shift was also studied when the strain was varied. The sensitivities associated with the data presented in Figure 4 are shown in Table 1. All of the linear fits have an R-squared value (r²) equal to or higher than 0.99.
To demonstrate the discrimination equation, several simultaneous measurements were obtained and Equation (7) was applied (see Figure 6). In this system, the frequency variation of the spectral fringes occurs only with the variation of the strain. The amplitude variation depends on both the torsion and the strain, being strongly dependent on the torsion when compared to its dependence on the strain. Thus, with the application of Equation (7), it is possible to achieve simultaneous strain and torsion measurements. The phase variation can be characterized through the common analysis of the shift of the spectrum's extremes or through the variation of the FFT peak frequencies. From Table 1, it can be noted that the frequency shift achieved with the strain variation is 100 dB higher when compared to the torsional variation of the system. The amplitude variation of the FFT peaks, which corresponds to the visibility of the spectrum, achieved with the torsional variation, is 33 dB higher when compared to the strain variation of the system. While the strain measurement is an interferometric measurement, since there is a change in the fringes' spectral frequency (i.e., a variation of the extremes of the spectrum), the torsion measurement is an intensity measurement, since there is a linear rotation of the polarization of one arm of the interferometer and no variation of the interferometer's phase.
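A minimal sketch of applying Equation (7) numerically, using the sensitivities reported in the abstract and Table 1 as the matrix entries; the measured changes Δf and ΔA below are hypothetical inputs chosen to give round-number outputs.

```python
import numpy as np

# Sensitivity (linear fit slope) matrix K from the characterization.
# Rows: FFT-peak frequency shift (nm^-1) and amplitude change (dimensionless);
# columns: strain (ue) and torsion (deg).
K = np.array([
    [-2.1e-4, -2.177e-12],   # K_{eps,f}, K_{tau,f}
    [4.9e-7,  1.02e-3],      # K_{eps,A}, K_{tau,A}
])

# Hypothetical measured changes of the FFT-peak frequency and amplitude.
delta_f = -0.021   # nm^-1
delta_A = 0.051    # dimensionless

# Solve Equation (7): (delta_eps, delta_tau) = K^-1 (delta_f, delta_A).
delta_eps, delta_tau = np.linalg.solve(K, [delta_f, delta_A])
print(f"strain change:  {delta_eps:.0f} ue")    # ~100 ue
print(f"torsion change: {delta_tau:.0f} deg")   # ~50 deg
```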
For ΔL ≈ 0 and Δn ≠ 0
In a second step, the use of arms of equal length but with different refractive indices was also explored. For this, the system was initially balanced (Figure 7). An FSR of 15.5 nm at a wavelength of 1531.4 nm was obtained, which implies a length difference between the interferometer arms of 0.1 mm; this is the value used as the measurement uncertainty. Two fibers with a length of 120 mm and different refractive indices were then added. An FSR of 4.5 nm at a wavelength of 1539.9 nm was obtained, which implies a ∆n of (4.4 ± 0.2) × 10−3.
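Both numbers quoted in this subsection can be checked from the relation FSR = λ²/Δ(nL); the sketch below does so, assuming an effective index n = 1.468 for the balanced case.

```python
# FSR relation for an unbalanced two-beam interferometer:
# FSR = lambda^2 / delta(n*L).

# Residual arm-length difference of the "balanced" system
# (equal indices; n = 1.468 is an assumed SMF value):
n = 1.468
fsr1, wl1 = 15.5e-9, 1531.4e-9
delta_L = wl1**2 / (fsr1 * n)
print(f"residual length difference: {delta_L * 1e3:.2f} mm")  # ~0.10 mm

# Index difference implied by the 4.5 nm FSR with two 120 mm fibers
# of different refractive indices:
fsr2, wl2, L = 4.5e-9, 1539.9e-9, 0.120
delta_n = wl2**2 / (fsr2 * L)
print(f"implied delta n: {delta_n:.2e}")   # ~4.4e-3, as reported
```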
Discussion
This system allows the simultaneous measurement of two physical quantities. An advantage of having two distinct arms with similar characteristics is that, if both arms are exposed to the same variation of a physical quantity (for example, temperature), the impact is practically null, as the optical path variations in the two arms are identical. In addition, with a difference in arm lengths similar to the length of Hi-Bi fiber used in a traditional FLM [7], a much higher phase difference can be obtained, which will allow the development of optical strain sensors that exploit the more sensitive Vernier effect with a shorter fiber length; moreover, the whole system is built only with single-mode fiber. FSR control has also been demonstrated with interferometer arms of equal length but different refractive indices, which accentuates the interferometer's phase difference. Finally, it was demonstrated that this new FLM configuration (ring interferometer type) allows the system to be balanced.
Conclusions
This optical fiber system allows simultaneous measurements of strain and torsion. Strain induces a phase variation of the interferometer, which corresponds to a variation of the spectral frequencies of (−2.1 ± 0.3) × 10⁻⁴ nm⁻¹·µε⁻¹, 100 dB higher than that obtained under torsion. The phase variation also results in a spectral shift of (0.576 ± 0.009) pm·µε⁻¹. Torsion, in contrast, rotates the polarization, producing a cosine variation of the spectrum visibility. This can be measured through the amplitude of the FFT peaks, which varies by (1.02 ± 0.06) × 10⁻³/°, 33 dB higher than the amplitude variation obtained under strain. It has also been shown that the FSR can be controlled through the difference between the refractive indices of the two arms, and that the system can be balanced; the latter feature is not possible with traditional FLMs based on Hi-Bi fiber.
The ability to achieve a low FSR with reduced fiber lengths will allow high strain sensitivity to be achieved through the implementation of the enhanced Vernier effect based on the push-pull method.
"Engineering",
"Physics"
] |
Particle dynamics modeling methods for colloid suspensions
We present a review and critique of several methods for the simulation of the dynamics of colloidal suspensions at the mesoscale. We focus particularly on simulation techniques for hydrodynamic interactions, including implicit solvents (Fast Lubrication Dynamics, an approximation to Stokesian Dynamics) and explicit/particle-based solvents (Multi-Particle Collision Dynamics and Dissipative Particle Dynamics). Several variants of each method are compared quantitatively for the canonical system of monodisperse hard spheres, with a particular focus on diffusion characteristics, as well as shear rheology and microstructure. In all cases, we attempt to match the relevant properties of a well-characterized solvent, which turns out to be challenging for the explicit solvent models. Reasonable quantitative agreement is observed among all methods, but overall the Fast Lubrication Dynamics technique shows the best accuracy and performance. We also devote significant discussion to the extension of these methods to more complex situations of interest in industrial applications, including models for non-Newtonian solvent rheology, non-spherical particles, drying and curing of solvent and flows in complex geometries. This work identifies research challenges and motivates future efforts to develop techniques for quantitative, predictive simulations of industrially relevant colloidal suspension processes.
Colloidal suspensions have long been studied both from an academic perspective and for their relevance in technological applications. Although the study of colloidal suspensions extends more broadly to different phases of the dispersed and dispersion media (e.g. foams, emulsions, gels, aerosols) [94,189], we focus exclusively on solid particles dispersed in a liquid medium (sols). For the purposes of this work, we define the colloidal regime based on a length scale that is much larger than molecular dimensions, but small enough that thermal fluctuations as well as colloidal forces (i.e. van der Waals dispersion and long-range electrostatic forces) are significant. For typical systems, this translates to effective particle diameters on the order of several tens of nanometers to several microns (∼0.1-2 μm). Traditional application areas for such colloidal suspensions include sedimentation of clays [54,166], oil recovery and transport [143,213], and rheological properties of paints, cosmetics, adhesives, food products, etc. [44,53]. More recently, there has been renewed interest in particle suspensions due to their presence in biological systems (e.g. red blood cells and platelets in blood [43,84,148,177], microorganisms in biological suspensions [61], proteins and nucleic acids in aqueous environments [108], drug encapsulation and delivery [157]) as well as their applications in nanotechnology and nanomanufacturing processes (e.g. nanoparticle-reinforced composites [76,215], battery electrodes [236], control of optoelectronic properties by specific arrangements of colloidal particles [195], sol-gel synthesis [28]).
From an applications perspective, the goal of studying colloidal suspensions is to understand, predict and control suspension rheological properties and particle microstructure. In applications where the desired product is a liquid suspension (e.g. cosmetics, adhesives, paints, food products), rheological properties are directly relevant to product performance; in cases where the solvent is cured or removed, the rheological properties of the suspension are critical to manufacturing processing conditions, whereas the final (dry) microstructure largely governs product properties and performance. For example, in injection-molded parts, particulate composites and thin film coatings, the spatial and orientational distribution of particles can drastically affect product properties.
However, both rheology and microstructure formation are complex, non-equilibrium phenomena. A more fundamental process that governs both of these is colloid diffusion. Although it may not be directly relevant in some applications, we argue that diffusion, which is a dynamic, equilibrium process, fundamentally underlies rheological properties and microstructure formation, and must be well-understood before these more complex phenomena can be addressed. Furthermore, diffusivity is in principle a readily measurable quantity, and diffusion is sufficiently complex to reflect many of the relevant underlying physics (hydrodynamic interactions, colloidal forces). Colloidal diffusion processes have also been studied extensively by theory, experiments and simulations. We therefore identify diffusion, rheological properties and microstructure formation as the three main properties of interest in the study of colloidal suspensions.
One of the difficulties in studying colloidal systems lies in obtaining accurate information on disparate time and length scales, which is essential to understanding macroscale behavior. Common experimental techniques include various forms of radiation scattering (e.g. X-ray, neutron, laser), confocal microscopy, fluorescence recovery after photobleaching (FRAP), pulsed-field-gradient nuclear magnetic resonance (PFG NMR), and a host of conventional techniques for probing rheology and imaging microstructures. With regard to dynamics, light scattering techniques provide extensive information on the spatial and temporal correlations between particles, typically in the form of structure factors and intermediate scattering functions [14,56,151,183]. However, with increasing complexity (e.g. particle aggregation, polydispersity, complex particle shapes and flow geometries), interpreting these quantities becomes more difficult, and significant limitations exist with regard to particle sizes, volume fractions and accessible time scales. Confocal microscopy techniques paired with image recognition algorithms have emerged as an important tool for directly probing the dynamics and structural evolution of colloidal systems [39,46,55,230], but also have limitations with regard to particle size, time scales and various limiting requirements for the optical properties of the particles and solvent (e.g. equal refractive indices). PFG NMR [158,209], FRAP [20,102] and rheology techniques [77,155,188,189] provide key information about average dynamical properties such as diffusion coefficients and frequency-dependent rheological data, but cannot directly probe dynamics at the individual particle scale, and the interpretation of microstructure evolution is often not straightforward. Computer simulations at the mesoscale, where colloidal particles are treated as discrete elements much larger than the atomic scale, have therefore emerged as an important additional tool for the study of colloidal suspensions, both to aid in the interpretation of experimental data and in some cases as standalone predictive techniques.
The focus of this work is to discuss and evaluate several computer simulation methods relevant to colloidal suspensions. We do not attempt a comprehensive literature review of all such techniques, as the field is quite large; instead, we focus on several methods that are primarily off-lattice and particle-based, where the colloidal particles are always treated explicitly (sometimes referred to as a Lagrangian approach), while the solvent is treated either as a separate set of particles (explicit) or incorporated in the colloid-colloid interaction (implicit). The methods we have selected are Fast Lubrication Dynamics (FLD) [31,124,126], which is a recently-developed expedient approximation to Stokesian dynamics (SD) [25-27]; multi-particle collision dynamics (MPCD) [92,164,169], originally known as stochastic rotation dynamics (SRD) [144]; and dissipative particle dynamics (DPD) [87,95]. We largely exclude several important classes of colloid suspension simulation techniques, including lattice-based treatments of the solvent, in particular Lattice-Boltzmann techniques [1,37,57,128]; field-based or Eulerian treatments of the particles, where a spatially-varying continuous density variable is used rather than explicit particles to represent the solid phase [64]; and a host of continuum Navier-Stokes-based techniques for treatment of the solvent, including various finite element, finite difference, volume-of-fluid, boundary element and spectral methods [66,78,80,83,96,134,163]. That said, we do cite important work in these areas in Sect. 5 of this paper, as often they offer the only route to address challenging applications. We also exclude particle-based continuum treatments of the solvent (sometimes referred to as "meshless methods" [13,161]), such as smoothed particle hydrodynamics (SPH) [82,159], the element-free Galerkin method (EFGM) [12,73], the reproducing kernel particle method (RKPM) [137,138] and the finite point method (FPM) [167]. To our knowledge, these latter methods have not been applied to the study of colloidal suspensions, and their potential in this area remains largely unexplored. All of the aforementioned techniques have proved extremely useful in many cases, and we discuss them briefly where appropriate.
We note that many excellent reviews of computer simulation techniques for colloidal suspensions similar to those on which we are focusing already exist in the literature. Barnes et al. [9] provided an early survey of particle-based simulation methods for dense suspension rheology, but did not carry out a quantitative comparison of the various methods, and several currently popular techniques (DPD, MPCD) had not been developed at the time of their work. Brady and Bossis [27] and Foss and Brady [75] have summarized the development and extensive applications of the Stokesian Dynamics method, which has been highly successful for a broad range of problems. Harting et al. [91] reviewed the use of MPCD and Lattice-Boltzmann methods for particle suspensions, and provided useful guidelines as to which method is more appropriate for various physical situations. The review by Dunweg and Ladd [57] of Lattice-Boltzmann methods for colloidal suspensions also gives an excellent overview of DPD, MPCD and SD-based methods and the underlying theory. Van der Sman has presented an excellent survey of methods that include Lattice-Boltzmann, a variety of continuum-based treatments of the solvent as well as DPD and MPCD for suspension flows in confined geometries [207]. His work provides qualitative insights into matching these diverse methods with various length scales and flow regimes, and suggests that the Lattice-Boltzmann method may be the most versatile for complex geometries over multiple length scales. More recently, Dickinson [52] has reviewed the state of the art for a number of particle-based simulation methods and the insights they provide on colloid aggregation and phase behavior.
However, the reviews listed above rarely report quantitative comparisons of simulation methods for the same system in a manner that allows for direct critical evaluation of their accuracy. Several authors have recently presented rigorous quantitative comparisons of some of these methods: Batôt et al. [11] have compared colloid diffusion and conductivity for neutral and charged particles using MPCD and Brownian dynamics (BD). They employed two different collisional coupling schemes for MPCD, and BD with no hydrodynamic interactions as well as using a Felderhof hydrodynamic tensor [70]. Tomilov et al. [217] quantified various colloid aggregation properties using BD, BD with a Yamakawa-Rotne-Prager hydrodynamic tensor [234] and MPCD. Schlauch et al. [193] compared finite element, Lattice-Boltzmann, and Stokesian dynamics methods for their ability to reproduce forces and torques on assemblies of stationary colloid particles, but did not investigate dynamic suspensions. In previous work [197], we reported a comparison of MPCD, FLD and DPD in the context of diffusion and rheological properties for a system of colloidal polystyrene particles in aqueous solution. The simulation methods were tested briefly against a system of hard-sphere colloids, but the emphasis was on their ability to reproduce the experimentally measured properties of the polystyrene-water system. In this work, we delve into greater detail for the hard-sphere colloid system with an emphasis on colloid diffusion characteristics, and explore additional methodological details for all of these simulation techniques. All the methods that we discuss have been incorporated into the same parallel molecular dynamics software package (LAMMPS [176]), which will allow for an even more direct comparison, including computational performance. Furthermore, we devote substantial discussion in Sect. 5 to the feasibility of extending these techniques for predictive simulations of more complex physics and realistic manufacturing processes. Overall, it is hoped that this work will help identify areas of deficiency for the various techniques discussed and motivate additional method development efforts.
A high-level summary of the key findings of this work is presented in Table 1. The table shows that while implicit solvent methods are more accurate for the simple systems addressed in this paper, explicit methods offer far greater promise for more advanced applications. The underpinning reasons for this will become clearer in the sections that follow. Specifically, in Sect. 5 we discuss the various extensions of these techniques to more complex physical situations. We include this table here as motivation for what follows, as well as a convenient summary of outstanding research challenges in the field.
The remainder of the paper is organized as follows: Sect. 2 presents a historical perspective on key developments in colloid suspension theory and an overview of the dominant physics involved in colloidal suspension modeling. Section 3 discusses the details of the different simulation techniques that we focus on in this work (FLD, MPCD and DPD), including a section devoted to the efficient implementation of simulation algorithms for colloidal systems. In Sect. 4, we present simulation results from our work as well as from the literature. We emphasize diffusion of hard-sphere colloids as a key metric for the accuracy of various simulation methods, but also discuss predictions of rheological behavior and equilibrium colloid particle microstructure. In all cases, we provide an evaluation of the different simulation methods, including accuracy, flexibility and computational speed. Section 5 is devoted to the advanced capabilities needed to extend these methods to complex applications. Finally, we conclude with a summary of our findings, some of the outstanding challenges, and future directions for particle-based simulations of colloids.
Historical perspective
Seminal work in the theory of colloidal suspensions dates back to investigations of single-particle Brownian motion by Einstein [62] and Smoluchowski [227], and subsequent experimental validation by Perrin [171], all of which were instrumental in confirming the atomic nature of matter. Subsequent theoretical studies on the equilibrium properties of colloidal suspensions by Derjaguin and Landau [49] as well as Verwey and Overbeek [224] focused on understanding the origins of interactions between colloidal particles. This resulted in the Derjaguin-Landau-Verwey-Overbeek (DLVO) theory, which describes forces of molecular origin between bodies immersed in an electrolyte, and can be used to quantify several key features of colloid aggregation. With regard to dynamic properties, extensive theoretical developments towards Fokker-Planck formulations of single- and multi-particle colloidal systems appear in several sources [34,50,147]. While the direct use of such formulations is generally unwieldy, they provide a rigorous starting point and a justification for the Langevin and generalized Langevin equations that form the basis of many current simulation algorithms. Additionally, several approximations for colloid diffusion coefficients and suspension viscosity have been derived theoretically for simple systems in the dilute limit [10]. Unfortunately, for most practical applications of interest, the complexity of realistic colloidal suspensions precludes the direct use of a purely theoretical treatment.

Table 1 Summary of the current capabilities of the simulation methods treated in this work. Coloring indicates the relative state of development for various capabilities (green: most mature; yellow: moderately mature; red: least mature). (Color table online)
Computer-aided simulation has greatly accelerated the understanding of colloidal systems. The typical computational framework consists of an N-body Newtonian dynamics solver to treat the colloidal particles (i.e. a discrete element model, or DEM framework), coupled with a solver for the hydrodynamic effects of the suspending fluid. Original algorithms for DEM simulations date back to molecular dynamics (MD) simulations of simple Lennard-Jones fluids in the late 1950s [2], but a fully atomistic treatment of a colloidal suspension still remains largely intractable due to the vast separation in length and time scales between solvent molecules and colloidal particles. Typically, the solvent degrees of freedom are coarse-grained in some fashion. Seminal work by Ermak and McCammon resulted in a Brownian dynamics algorithm for the simulation of a collection of spherical particles, with various approximations for hydrodynamic interactions [65]. A significant refinement of the Ermak and McCammon algorithm came in the form of Stokesian dynamics (SD) [25-27,59]. SD includes a near-field correction to the interparticle hydrodynamic interaction as well as improvements to the far-field interaction by accounting for torques and stresslets. Both the Ermak-McCammon algorithm and Stokesian dynamics algorithms treat hydrodynamics based on multipole expansions to solve the Stokes equation in the presence of moving spheres. Despite often prohibitive computational costs, these methods have proved to be remarkably accurate and extremely useful, and more recent accelerated versions [8,154,200] as well as simplified versions such as Fast Lubrication Dynamics (FLD) [126] show even greater promise. However, due to their underlying assumptions, they are often limited in terms of flexibility to different geometries (particle shapes as well as overall domain geometry) and complicating physical phenomena (e.g. complex solvent rheology, drying; see Table 1).
In parallel, an increasing number of lattice-based (e.g. lattice gas automata and Lattice-Boltzmann; various grid-based Navier-Stokes solvers) as well as off-lattice particle-based treatments of the solvent have emerged. Some of the significant developments aimed at colloidal suspensions came in the form of Lattice-Boltzmann algorithms which include thermal fluctuations [130] and the development of coarse-grained particle-based solvent methods like dissipative particle dynamics (DPD) [95] and smoothed particle hydrodynamics (SPH) [82,140]. More recently, a hybrid lattice/off-lattice algorithm known as stochastic rotation dynamics (SRD) [144] or Multi-Particle Collision Dynamics (MPCD) [92,164] has gained traction in the soft matter simulation community. The SRD algorithm was derived from techniques developed for rarefied gas flow simulations, e.g. direct simulation Monte Carlo (DSMC) [18]. The DPD and MPCD methods are discussed in greater detail in Sect. 3, following a description of the key physics involved in modeling colloidal suspensions.
Governing physical principles
We describe a suspension of colloidal particles as a collection of N interacting, moving particles in a carrier liquid, or "solvent". In three dimensions, each colloidal particle i has in general three translational degrees of freedom r_i and three orientational degrees of freedom ω_i. We group these into the 6N-dimensional vector r = {r_1, r_2, . . . , r_N, ω_1, ω_2, . . . , ω_N}. Similarly, the translational and angular velocities of all particles are lumped into the vector U, the forces and torques into the vector F, and the masses and moments of inertia into the matrix M.
In the general framework of a collection of N colloidal particles submerged in a bath of N_s solvent particles, the equations of motion for the colloidal particles can be written as:

M dU/dt = F^C + F^S (1)

Here, F^C is the resultant (conservative) force acting on the particles due to interactions with other colloidal particles, and F^S is the resultant force due to interactions with solvent particles. When the solvent is treated atomistically, the nature and form of the two interactions are often similar, and an additional N_s equations similar to (1) are required; when the solvent particles are coarse-grained representations of actual solvent molecules, F^S is given by a modified pairwise interaction or a suitable solvent-colloid collision rule. The choice of these rules and interactions depends on the method. We will return to the details of the various methods in subsequent sections.
An alternative approach to the simulation of additional solvent particles relies on a continuum treatment of the solvent. As a result, the solvent degrees of freedom can be discarded, but the forces experienced by the colloid particles must be modified to reproduce the key effects of the solvent. This typically assumes that the momentum relaxation timescale in the solvent is much smaller than the timescale of colloid diffusion (i.e. steady flow). The motion of a given colloidal particle induces a flow field in the solvent, which in turn affects its own motion (drag) as well as the motion of the remaining colloidal particles (hydrodynamic interactions). Collectively, these effects are lumped into the hydrodynamic forces, F^H. In addition, colloidal particles experience a large number of collisions with solvent molecules, which give rise to Brownian motion. We denote the resulting forces as F^B, the total Brownian force. The equations of motion for such a model are therefore given by:

M dU/dt = F^C + F^H + F^B (2)

We refer to all such models of colloidal suspensions as implicit solvent models. While the term "implicit solvent" often refers to models that also account for the thermodynamic effects of solvents on conservative inter-particle interactions [187] (e.g. electrostatic screening, hydrophobic effects, etc.), we use the term to refer exclusively to the hydrodynamic and dissipative effects of the solvent. The aforementioned thermodynamic properties of the solvent are accounted for in the conservative interparticle force term F^C. In both implicit and explicit solvent models, F^C is a conservative force that depends only on particle positions (and orientations), while the hydrodynamic force F^H in implicit solvent models depends additionally on particle velocities, and the Brownian force is constrained to satisfy the fluctuation-dissipation theorem.
In the remainder of this section, we provide a discussion of the general physical principles underlying each of these types of forces. The details pertaining to the treatment and implementation of the solvent models are largely deferred to Sect. 3.
Hydrodynamic interactions
Hydrodynamic interactions and Brownian motion of colloidal particles arise naturally when using explicit representations of the solvent, so long as several relatively simple conditions are met. First, the solvent dynamics should conserve mass, linear and angular momentum and energy on the time and length scales relevant to the motion of colloidal particles. When a constant temperature must be maintained in the solvent, the thermostatting algorithm must be carefully designed so as to minimize interference with the flow characteristics. Second, the coupling between colloidal particles and solvent must be done in a manner consistent with particle-scale flow characteristics; in the canonical case of a hard-sphere colloidal suspension, this typically entails enforcing no-slip boundary conditions at the particle surface. Several other features are additionally desirable for explicit solvents, such as Galilean invariance, incompressibility, realistic solvent rheological and thermodynamic properties and realistic Schmidt number (ratio of momentum to mass diffusivity). In many cases, the solvent coarse-graining process compromises on several of these features in order to keep the methods computationally expedient (cf. Sect. 3).
Implicit solvent methods augment the particle forces with hydrodynamic interactions and fluctuating Brownian forces, as shown in Eq. (2). The most common starting point for implicit solvent models is a continuum treatment via the Navier-Stokes equations. Due to the small size of colloidal particles, inertial effects are insignificant (low Reynolds number), which leads to hydrodynamics well-described by the Stokes equations:

∇p = μ∇²v, ∇·v = 0 (3)

where v is the fluid velocity, p is the pressure and μ is the fluid dynamic viscosity. We note that in writing Eq. (3), we have also assumed a Newtonian solvent. A Green's function (equivalent to the Oseen tensor) can be obtained for the Stokes equations, and a boundary integral equation representation can be used to reduce the three-dimensional system of Eq. (3) to a two-dimensional integral equation system [116]. This is the basis of the Boundary Element Method (BEM), which can be used to solve the flow field for arbitrary particle geometries, and as such represents a highly general approach to the treatment of Stokes hydrodynamics [116]. Unfortunately, the BEM is computationally too expensive for the time scales of interest in the present work, so we do not pursue it here. Several approaches have been devised to approximate the hydrodynamic interactions for systems that exhibit certain symmetries in particle shapes and relatively simple computational domains. A key simplifying feature of the Stokes equations is their linearity, which makes the generalized hydrodynamic force/torque vector F^H a linear function of the particle translational/angular velocities U:

F^H = −R·U (4)

Here, R is known as the hydrodynamic resistance tensor, which in the current formulation is a 6N-dimensional matrix that describes the effects of the velocity of any particle on the hydrodynamic force experienced by any other particle in the system. Additionally, we have implicitly assumed that no bulk flow fields are imposed. The Langevin equation for the colloid particles then becomes:

M dU/dt = F^C − R·U + F^B (5)

Both hydrodynamic and Brownian forces originate from interactions between colloidal and solvent particles, so it is not surprising that they are closely related. This relationship can be derived based on the well-known fluctuation-dissipation theorem, and yields the following requirement for the properties of the 6N-dimensional Brownian force vector F^B:

⟨F^B(t)⟩ = 0, ⟨F^B(t) F^B(τ)⟩ = 2 k_B T R δ_D(t − τ) (6)

Here, angular brackets denote an ensemble average, t and τ represent time, k_B and T are Boltzmann's constant and the system temperature, respectively, and δ_D is the Dirac delta function. Clearly, the many-body resistance tensor R is central for both hydrodynamic and Brownian forces. Much of the research effort in developing implicit solvent methods has been devoted to finding accurate and computationally efficient approximation schemes for this resistance tensor. The work of Ermak and McCammon [65] set the groundwork for many subsequent implicit solvent methods. The basic premise of their algorithm is to integrate Eq. (5) twice in time and determine the particle displacements over a time step Δt much larger than the inertial time scale of the colloid particles τ_C = m/6πμa, where μ is the solvent viscosity, m is the particle mass and a is the particle radius. The interested reader is referred to the original work for details [65]. We note that this approach effectively ignores the inertial time scale of the colloid particles, which is not always a good assumption.
In the current notation, the resulting expression for the evolution of particle positions r is given by:

r(t + Δt) = r(t) + R⁻¹(F^C + F^B) Δt + k_B T (∇·R⁻¹) Δt (7)

The inverse of the resistance tensor R⁻¹ is known as the mobility tensor, M. The constraint from the fluctuation-dissipation theorem on the magnitude of the Brownian force F^B in this formulation is:

⟨F^B⟩ = 0, ⟨F^B F^B⟩ = 2 k_B T R / Δt (8)

The simple forward time-stepping scheme in Eq. (7) can be problematic due to difficulties associated with computing the divergence of the mobility tensor (∇·R⁻¹). Several tractable alternatives exist, including the use of a midpoint time-stepping scheme [71,85]. The entries in the mobility tensor (or equivalently, the resistance tensor) can be derived from various analytical treatments of the Stokes equations (3). In the simplest case, inter-particle hydrodynamic interactions are ignored, leaving only isotropic drag terms corresponding to the dilute limit. This yields resistance tensor entries R_ij = 6πμa δ_ij for all force-translational velocity couplings, and R_ij = 8πμa³ δ_ij for all torque-angular velocity pairs. The accuracy of this simple treatment, often termed Brownian dynamics (BD), is expected to deteriorate when colloid particles are in close proximity. However, its convenient diagonal form is computationally expedient owing to the simple inversion of the resistance tensor. More sophisticated mobility tensors have been derived based on a multipole expansion treatment of the Stokes equations. In the original formulation of Ermak and McCammon, the Oseen tensor [90,235] as well as the Rotne-Prager tensor [186] were implemented. The resulting algorithms are also classified as Brownian dynamics, but modified based on various approximations for the hydrodynamic tensor. Such methods have met with some success and can often reproduce qualitative trends in suspension diffusion and rheological properties, but perform poorly for dense suspensions or when particles are in close contact. In near-contact regimes, lubrication forces can only be accounted for with high-order terms in the moment expansion, which is impractical. As it stands, the computational cost of these methods is fairly significant, due to the calculation of the resistance tensor, which scales as N², and the inversion of the mobility matrix required for the Brownian displacement terms, which scales as N³ [7,57].
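As an illustration of the simplest (free-draining) limit just described, the following sketch advances spherical particles using isotropic Stokes drag and uncorrelated Brownian displacements, i.e. Eq. (7) with a constant diagonal resistance tensor, for which the divergence term vanishes. All names and parameter values are illustrative, not taken from any particular implementation.

```python
import numpy as np

def bd_step(r, forces, a, mu, kT, dt, rng):
    """One free-draining Brownian dynamics step (Ermak-McCammon with a
    diagonal mobility; the divergence term vanishes for constant mobility).
    r: (N, 3) positions; forces: (N, 3) conservative forces;
    a: particle radius; mu: solvent viscosity; kT: thermal energy."""
    zeta = 6.0 * np.pi * mu * a                  # Stokes drag coefficient
    drift = forces / zeta * dt                   # deterministic displacement
    # Brownian displacement: zero mean, variance 2 kT dt / zeta per component
    noise = rng.normal(scale=np.sqrt(2.0 * kT * dt / zeta), size=r.shape)
    return r + drift + noise

rng = np.random.default_rng(0)
r = rng.uniform(0.0, 10.0, size=(100, 3))        # illustrative initial positions
r = bd_step(r, np.zeros_like(r), a=1.0, mu=1.0, kT=1.0, dt=1e-3, rng=rng)
```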
The more rigorous Stokesian dynamics (SD) technique [25-27,59,75] has addressed many of the shortcomings in the accuracy of these techniques, albeit at even higher computational cost. The discussion so far has centered on a quiescent suspension of monodisperse spherical particles in an infinite medium. In its original form, the SD method efficiently accounts for near-field lubrication terms, considers the possibility of an imposed linear shear flow, and improves the far-field hydrodynamics with the inclusion of stresslet (S^H) to rate-of-strain (E^∞) coupling. For more details, the interested reader is referred to the original literature [27,59,75]. In Sect. 5, we discuss the application of SD to non-spherical particles and general geometries.
The Stokesian dynamics technique has proved extremely powerful, and remains one of the most accurate methods available for simulating multi-body hydrodynamics in the context of colloidal suspensions. However, due to its high computational cost it is currently extremely difficult to reach the time scales of interest for long-time diffusion. While the recently-developed accelerated Stokesian dynamics (ASD) techniques [8,200] have extended the timescales accessible to SD, the algorithm still scales poorly on parallel architectures. Variants and further improvements to ASD are available [48,72,107,125,153,200,225]. Additionally, several simpler pair-drag models [7] that are based primarily on pairwise lubrication terms have emerged as potential alternatives, particularly for high colloid volume fractions. The most recent variant of ASD, now commonly known as fast lubrication dynamics (FLD) [31,124,126], represents a significant simplification of SD and has shown considerable potential for use in predictive simulations of realistic systems. This technique is very closely related to an approximate version of ASD introduced by Banchio and Brady [8], which they dubbed ASDB-near-field, or ASDB-nf. We adopt the FLD nomenclature simply because more details are available on its implementation and performance [31,124,126] and it has been tested more extensively for the types of systems and the time scales that we are interested in here [126,197].
The FLD technique is based on splitting the resistance tensor R into an isotropic part that accounts for far-field multi-body effects, R_0, and a part that accounts for short-range pairwise lubrication effects, R_δ:

R = R_0 + R_δ (9)

The key simplifying assumption in FLD is choosing R_0 to be a diagonal tensor:

R_0,FU = 6πμa f⁰_FU(φ) I, R_0,TΩ = 8πμa³ f⁰_T(φ) I, R_0,SE = (20/3)πμa³ f⁰_SE(φ) I (10)

The scalars f⁰_FU, f⁰_T and f⁰_SE are functions only of the volume fraction of colloid particles φ, and are empirically fitted to match the short-time diffusivity and viscosity obtained from SD simulations. The details of the fitting procedure as well as other aspects of the method are described in greater detail in works by Kumar, Bybee and Higdon [31,124,126].
The lubrication component of the resistance tensor R_δ is computed using only pairwise frame-invariant interactions [7] based on lubrication theory solutions of two-particle interactions [106,117], similar to what is done for the near-field component of the SD algorithm. This gives rise to interactions that include terms of order δ as well as δ log(δ), where δ is the inverse of the surface-to-surface distance at the point of nearest approach for a pair of particles (δ = 1/(r_ij − a_i − a_j); a_i and a_j are the particle radii, and r_ij is the distance between their centres). Following previous work [31,124,197], we have tested the algorithm both with and without the log(δ) terms.
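To make the structure of R_δ more concrete, the sketch below evaluates only the leading squeeze-mode term of the pairwise lubrication force, which diverges as the surface gap closes; the prefactor is the standard two-sphere lubrication result, while the inner cutoff h_min is an illustrative regularization, not a value taken from the FLD papers.

```python
import numpy as np

def squeeze_lubrication_force(ri, rj, vi, vj, ai, aj, mu, h_min=1e-3):
    """Leading-order squeeze-mode lubrication force on particle i due to j.
    Only the O(1/h) normal term is kept; h_min is an illustrative inner
    cutoff to regularize the divergence at contact."""
    rij = rj - ri
    dist = np.linalg.norm(rij)
    n = rij / dist                        # unit vector from i to j
    h = max(dist - ai - aj, h_min)        # surface-to-surface gap
    a_red = ai * aj / (ai + aj)           # reduced radius of the pair
    v_rel_n = np.dot(vi - vj, n)          # rate of approach along the line of centers
    # Resistive force opposing the relative normal motion, diverging as 1/h
    return -6.0 * np.pi * mu * a_red**2 / h * v_rel_n * n
```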
The FLD method thus combines the pairwise near-field lubrication interactions that are dominant at high volume fractions and small inter-particle separations with a reasonable approximation for the far-field resistance tensor, which dominates at large particle separations. It inherits the flexibility of SD to account for imposed linear shear flows, but also its limitations with regard to more complex boundaries, flow conditions and solvent rheological properties. Although the discussion thus far has focused on monodisperse spherical particles, the extension to polydispersity is relatively straightforward, with pairwise lubrication expressions given by Kim and Karrila [116]. Extensions of SD and FLD to nonspherical particles will be discussed in Sect. 5, and more details pertaining to the implementation of FLD in this work are deferred to Sect. 3.
Conservative inter-particle interactions
The conservative force F^C in Eqs. (1) and (2) is only a function of particle positions, and includes inter-particle forces as well as any effects from external forces. The physical basis of conservative inter-particle interactions is typically a very short-range repulsion due to steric effects (i.e. volume exclusion/particle collision), a short-range attractive force that has its physical origins in dispersion interactions, and a long-range screened electrostatic force, which may be either attractive or repulsive. For multicomponent solvents, attractive depletion forces due to the presence of additional solutes can also be significant. Typically, surface chemistries of colloidal particles and solvent properties (pH, ionic strength) are modulated in order to balance the attractive van der Waals force with a repulsive electrostatic force and promote particle dispersion. Conservative inter-particle forces can often be approximated as a sum of pairwise two-body interactions, each of which depends only on the separation r_ij between particles i and j (as well as relative orientation for non-spherical particles). The total conservative force can thus be written as a sum of pairwise inter-particle forces and any external forces:

F^C_i = Σ_{j≠i} F^C_ij(r_ij) + F^ext_i (11)

The assumption of pairwise interactions ignores the possibility of multi-body effects, which can be problematic in some cases. For instance, colloidal particles coated with relatively large surface ligands (e.g. polymers) can interact in ways that alter the coating structure, which in turn affects their interactions with additional colloids. The discrete element framework can in principle be extended to include multi-body effects if their functional form can be determined, but the added complexity and expense are typically not justified.
The form of the pairwise inter-particle forces F^C_ij(r_ij) is highly dependent on the details of the colloid particle material, colloid surface topology and chemistry, solvent properties and thermodynamic state variables. Typically, the potential energy U_ij(r_ij) associated with the interaction of two particles is described, and the force is then trivially obtained from:

F^C_ij(r_ij) = −(dU_ij/dr_ij) r̂_ij (12)

Determining the functional form of U_ij(r_ij) for two particles in a solvent requires an averaging (or coarse-graining) of all degrees of freedom other than the inter-particle separation. This may include molecular details of the solvent (e.g. counterion distribution, hydration layers), the colloid surface (e.g. passivating ligands/polymers, any condensed counterions) and the colloid interior particle structure. In a rigorous statistical mechanical framework, U_ij(r_ij) is the potential of mean force between colloidal particles. Indeed, fully atomistic simulations of two colloidal particles can be used to compute the potential of mean force for a given system, which can then be tabulated and used in a larger scale simulation of a many-particle colloidal suspension [132]. This approach is computationally expensive, often suffers from statistical convergence issues, is only practical for systems with monodisperse, homogeneous particles and is only as accurate as the underlying atomistic force field. However, with improved force fields and increasingly powerful computing resources, it has the potential to become a key component in a multiscale modeling framework for colloidal suspensions.
The most common approaches to evaluating U_ij(r_ij) rely on approximate analytical theories that describe the dominant physical processes responsible for inter-particle interactions. The classic DLVO theory treats screened electrostatic/electrical double layer interactions in the context of a linearized mean-field approximation for the counterion distribution in the solvent, and dispersion interactions in terms of summations of empirical forms of atomic dispersion potentials. A detailed derivation is not given here for either case, since these are available in many other works [49,63,68,104,224]. Several forms of the electrostatic interaction can be found in the literature, and all contain a term that decays exponentially with inter-particle separation. An excellent discussion of these electrostatic potentials is given by Elimelech et al. [63]. In previous work [197], we have used the following form for the electrical double layer interaction potential for two particles with radius a separated by a center-to-center distance r:

U_el(r) = (64 π a ρ_∞ k_B T Γ₀² / κ²) exp[−κ(r − 2a)], Γ₀ = tanh(eψ₀ / 4k_B T) (13)

ρ_∞ is the bulk electrolyte concentration (assumed here to be a 1:1 salt, but other cases can easily be incorporated), κ is the inverse Debye screening length and ψ₀ is the electric potential at the particle surface, which can be related to experimentally measured zeta potentials. The interested reader is referred to the work of Schunk et al. [197], which includes a more detailed description of this potential and a discussion of how parameters can be selected to match experimental conditions. The colloid dispersion interaction components of the DLVO model are treated by summing the pairwise dispersion interactions between the constituent atoms/molecules in the particles. Once again, a variety of physical assumptions and approximation schemes have been used to carry out this summation [63]. Most formulations of DLVO theory treat only the attractive dispersion terms, which scale as r⁻⁶. In the original formulation of Hamaker [89], the resulting attractive potential for two spheres of radius a separated by a center-to-center distance r is:

U_A(r) = −(A_cc/6) [ 2a²/(r² − 4a²) + 2a²/r² + ln((r² − 4a²)/r²) ] (14)

Here and in previous work [197], we use the form derived by Everaers and Ejtehadi [68], which treats colloidal particles as collections of Lennard-Jones (LJ) particles, i.e. particles that interact according to:

U_LJ(r) = 4ε [ (σ/r)¹² − (σ/r)⁶ ] (15)

The parameters ε and σ are the usual LJ parameters representing the depth of the potential well and the characteristic dimensions of the LJ particles, respectively. The resulting integrated colloid-colloid potential U_cc, which includes the r⁻¹² repulsive component of the LJ potential, adds to Eq. (14) an analogous integrated repulsive term; the full expression (16) is lengthy and is given by Everaers and Ejtehadi [68]. The overall potential that we advocate for DLVO-type interactions is then the sum of the electrostatic component, Eq. (13), and the attractive and repulsive dispersion interactions, Eq. (16). These are implemented in the LAMMPS software package [176] as 'pair_style yukawa/colloid' and 'pair_style colloid', respectively.
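The following minimal sketch assembles a DLVO-type pair energy from the screened electrostatic term of Eq. (13) and the attractive Hamaker term of Eq. (14); the parameter values are placeholders for illustration only, and no regularization of the contact divergence is included.

```python
import numpy as np

kB = 1.380649e-23  # J/K

def u_dlvo(r, a, psi0, kappa, rho_inf, A_cc, T=298.0):
    """DLVO pair energy for two equal spheres of radius a at
    center-to-center distance r (SI units; 1:1 electrolyte assumed).
    Electrostatic term: linear-superposition form of Eq. (13);
    dispersion term: Hamaker expression of Eq. (14)."""
    e = 1.602176634e-19
    gamma0 = np.tanh(e * psi0 / (4.0 * kB * T))
    u_el = (64.0 * np.pi * a * rho_inf * kB * T * gamma0**2 / kappa**2
            * np.exp(-kappa * (r - 2.0 * a)))
    u_disp = -(A_cc / 6.0) * (2.0 * a**2 / (r**2 - 4.0 * a**2)
                              + 2.0 * a**2 / r**2
                              + np.log((r**2 - 4.0 * a**2) / r**2))
    return u_el + u_disp

# Illustrative numbers: 100 nm particles, 25 mV surface potential,
# 10 nm Debye length, 1e-20 J Hamaker constant, ~1 mM 1:1 salt.
print(u_dlvo(r=2.1e-7, a=1.0e-7, psi0=0.025, kappa=1.0e8,
             rho_inf=6.0e23, A_cc=1.0e-20))
```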
While the qualitative features of DLVO theory have provided invaluable insights into the behavior of colloidal suspensions, its approximations can often be problematic. Real systems typically consist of particles that are heterogeneous in their composition, structure and surface characteristics. As such, even simple parameters like the effective particle radius a are not straightforward to assign for the purposes of calibrating the potential. Based on previous work [197], we have found the Hamaker constant A_cc to be particularly difficult to estimate. Typical values based on the derivation discussed above [180] can vary by an order of magnitude depending on some of the underlying assumptions. Typically, several key parameters can be assigned based on experimental measurements of directly related properties, such as the zeta-potential and particle diameter; however, the Hamaker constant is often treated as a fitting parameter, which requires some level of calibration based on experimentally measured macroscopic properties. For the idealized systems in the present work, colloid particles are assumed to be made up of small Lennard-Jones particles with parameters σ and ε. The Hamaker constant can then be calculated analytically as A_cc = 4π²ε ρ₁ρ₂σ⁶, where ρ₁ and ρ₂ are the number densities of constituent LJ particles in colloid particles 1 and 2.
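For reference, a one-line evaluation of this analytical Hamaker constant in reduced units (the densities chosen are placeholders):

```python
import numpy as np

eps, sigma = 1.0, 1.0     # LJ parameters of the constituent particles
rho1 = rho2 = 1.0         # number densities of constituent LJ particles
A_cc = 4.0 * np.pi**2 * eps * rho1 * rho2 * sigma**6
print(A_cc)               # ~39.5 in reduced units
```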
Most analytical treatments of dispersion potentials, including Eq. (16), lead to expressions that diverge at particle contact or near-contact. For dense suspensions under high shear or compressive forces, DLVO-like potentials must be replaced by short-range interactions for small particle separations. This is typically done using potentials derived in the context of granular flow simulations [33,105,202]. Various forms of such granular potentials have been proposed depending on various assumptions relating to the particle contact mechanics [51]. For most colloidal simulations in a liquid solvent, friction effects can be ignored, and simple frictionless Hookean or Hertzian granular potentials [45] are adequate. More complex potentials that also include friction effects [201] and inter-particle adhesion surface forces [110,145] can also be included.
Attempts have also been made at deriving closed-form potentials that include effects of grafted polymer ligands. For instance, Vincent et al. [226] derived an interaction potential between particles with grafted polymer chains in a solvent containing additional polymer molecules, which includes parameters such as the polymer χ-parameter, polymer adsorbed densities, molar volumes, etc. Despite the added complexity, such approaches can be useful in simulations for predicting general trends as a function of changes in various system parameters. However, the additional parameters can be difficult to evaluate, making quantitative predictions for such systems even more challenging.
Methods
As previously stated, the goal of this work is to directly compare several techniques for the simulation of colloidal suspensions. To make the comparison as straightforward as possible, we have selected the canonical system of hard-sphere monodisperse particles immersed in a Newtonian solvent.
In this section, we provide methodological details specific to the simulations carried out in this work. We therefore discuss our implementation of the MPCD and DPD methods, as well as additional details pertaining to the FLD method described in Sect. 2.2.1. All of the simulations have been carried out in the LAMMPS software package [176], which is freely distributed as open source code (http://lammps.sandia.gov). We make reference to various modules within LAMMPS (known as 'styles') for the reader interested in applying some of these methods. The basic molecular dynamics algorithm and its parallel implementation in LAMMPS are not discussed here; these are covered extensively in other works [2,4,176]. However, the last portion of this section provides more details of algorithmic considerations specific to colloidal suspensions.
An important distinction in this work compared to previous similar studies is that we attempt to match quantitatively the physical properties of a particular solvent, rather than the usual approach of using convenient values for various parameters in the solvent models, and reporting all results in terms of dimensionless quantities [11,22,36,169,181,217]. For all simulations presented here, we select the solvent properties to match those of a Lennard-Jones fluid of density 0.66 σ⁻³ at temperature kT = ε and pressure P = 0 (in units of ε σ⁻³), with a cutoff distance of r_c = 3.0σ and particle mass m_LJ. In previous work, it was shown that this results in a dynamic viscosity of μ = (1.01 ± 0.03) ε τ σ⁻³ [172], where τ is the Lennard-Jones time unit (τ = (m_LJ σ²/ε)^(1/2)). Note that we are not simulating Lennard-Jones particles at any point in this work, only using these solvent properties and Lennard-Jones units for convenience. The well-characterized Lennard-Jones system shares many of the same fundamental characteristics as real fluids, and the mapping procedure is in principle equivalent. This choice of solvent in combination with the hard-sphere-like interaction of colloid particles allows for a straightforward comparison of the different hydrodynamic treatments. The interested reader is referred to previous work [197] for discussions of the additional complications that arise from mapping to the properties of a more realistic system (polystyrene particles in water).
FLD implementation
The physical basis and mathematical treatment of hydrodynamic interactions and Brownian forces underpinning implicit solvent methods such as SD and FLD were discussed in Sect. 2.2.1. When implementing these methods in a molecular dynamics framework, the time evolution of colloid particles can be carried out without inertial terms, as was largely assumed throughout the development in Sect. 2.2.1. Alternatively, the inertial terms in Eq. (5) can be retained, and the equations of motion can be explicitly integrated in the usual manner. In previous work [197], we referred to these two schemes as implicit and explicit integration, respectively; here, we refer to them as 'non-inertial' (e.g. non-inertial FLD, or nFLD for short) and 'inertial' (iFLD), respectively. This is to avoid confusion with the terms implicit and explicit as they are applied to the solvent. As the name suggests, inertial schemes can resolve timescales shorter than the inertial timescale of the colloid, τ_C = m/6πμa, but naturally require a time step significantly smaller than this. On the other hand, non-inertial methods require a time step significantly larger than τ_C. The time step size is otherwise limited only by system-specific numerical considerations (e.g. conservative forces). We also note that both schemes assume that the underlying momentum relaxation in the solvent is instantaneous (i.e., the underlying hydrodynamic expressions assume steady flow; see Eq. (3)).
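The practical consequence of this choice can be made explicit by comparing candidate time steps against the colloid inertial timescale; the numbers below are illustrative, loosely based on the reduced solvent units adopted in this work.

```python
import numpy as np

# Inertial timescale of a colloid particle, tau_C = m / (6 pi mu a).
# iFLD must resolve this scale (dt << tau_C); nFLD must step over it
# (dt >> tau_C). Values are illustrative, in reduced LJ-style units.
mu = 1.01            # solvent dynamic viscosity
a = 5.0              # colloid radius
rho_c = 0.66         # colloid mass density (assumed equal to the solvent here)
m = rho_c * 4.0 / 3.0 * np.pi * a**3

tau_C = m / (6.0 * np.pi * mu * a)
print(f"tau_C = {tau_C:.2f}")
print(f"iFLD timestep should satisfy dt << {tau_C:.2f}")
print(f"nFLD timestep should satisfy dt >> {tau_C:.2f}")
```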
We also present results for Brownian dynamics simulations (BD), which are implemented as a straightforward simplification of FLD. In particular, if all pairwise lubrication terms R_δ in Eq. (9) are discarded, and volume fraction corrections to the isotropic terms are not carried out (i.e. f⁰_FU and f⁰_T are set to unity in Eq. (10)), the resulting simulation corresponds to Brownian dynamics (BD). The inertial version of BD (iBD) is then equivalent to Langevin dynamics (LD). All of these simplifications can be carried out in LAMMPS using various settings of the 'lubricate' pair styles.
The implementation of the FLD and BD schemes in LAMMPS follows the works of Kumar, Bybee and Higdon [31,124,126]. The discussion of Sect. 2.2.1 largely assumed quiescent flow conditions for simplicity, to which we now add the possibility of an imposed shear flow. In this case, the generalized velocities of colloid particles U are expressed relative to the velocity/angular velocity of the bulk fluid evaluated at the center of the particles, U^∞. Also, the stresslet (S^H)/rate-of-strain (E^∞) coupling introduced in the original Stokesian dynamics formulation is retained. The rate-of-strain tensor is a simple function of the shear rate γ̇ and the flow geometry [126]. The hydrodynamic force and stresslet are given by the following linear relationship:

F^H = −R_FU·(U − U^∞) + R_FE : E^∞, S^H = −R_SU·(U − U^∞) + R_SE : E^∞ (17)

We discussed the origins of the resistance tensor R in Sect. 2.2.1. The additional Brownian force is calculated based on Eq. (8), where the resistance tensor does not include the R_FE component. This requires taking the square root of the force-velocity resistance tensor (R_FU), which is trivial for the isotropic, diagonal portion R_0 (see Eq. (10)), and can be carried out in a pairwise fashion for the lubrication component R_δ [7]. The interested reader is referred to the relevant literature for additional details [7,31,124,126]. In the inertial version of FLD (iFLD), the Brownian force, hydrodynamic force and conservative inter-particle forces are summed, and the particle positions are updated using standard molecular dynamics schemes. In LAMMPS this is accomplished using "hybrid" pairwise interactions that include pair styles 'lubricate', 'brownian' and additional conservative pair styles (e.g. 'colloid' or 'yukawa/colloid'), in conjunction with a standard extended-body integrator (for translation and rotation) such as 'fix nve/sphere'.
For nFLD, the forces sum to zero, which leads to the following matrix problem for the particle velocities:

R_FU·(U − U^∞) = F^C + F^B + R_FE : E^∞ (18)

Since the resistance tensor R_FU is symmetric and positive definite, a conjugate-gradient algorithm can be used to solve for the particle velocities. The particle positions are then updated accordingly. In LAMMPS, this is accomplished using a similar hybrid pair style scheme, but with the use of pair style 'lubricateU' instead of 'lubricate', in conjunction with 'fix nve/noforce' to update particle positions.
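Because R_FU is symmetric positive definite, Eq. (18) maps directly onto a standard conjugate-gradient solve. The sketch below illustrates this with a generic SPD matrix standing in for R_FU; the assembly of the true resistance tensor (diagonal far-field plus pairwise lubrication terms) is omitted, and in practice the matrix-vector product would be applied matrix-free.

```python
import numpy as np
from scipy.sparse.linalg import cg, LinearOperator

rng = np.random.default_rng(1)
n = 6 * 50                      # 6 velocity components per particle, 50 particles

# Stand-in for the SPD resistance tensor R_FU: a random SPD matrix.
A = rng.normal(size=(n, n))
R_FU = A @ A.T + n * np.eye(n)

rhs = rng.normal(size=n)        # stands in for F^C + F^B + R_FE : E^inf

# Matrix-free interface: a real implementation would apply the diagonal
# far-field term plus pairwise lubrication terms directly in the matvec.
op = LinearOperator((n, n), matvec=lambda x: R_FU @ x)
U, info = cg(op, rhs)
assert info == 0                # 0 means CG converged
```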
The rate-of-strain is imposed for both iFLD and nFLD, and the resulting stress, which includes the solvent component, can be measured. Once the particle velocities are known, the stresslet S^H and the total stress in the solution τ can be computed readily in a post-processing step for a given configuration of particles (see Eqs. (13) and (14) in the work of Kumar and Higdon [126]). The viscosity μ_r of the suspension relative to the viscosity of the solvent μ₀ can be computed from the appropriate measured stress component τ_xy and the imposed shear rate γ̇:

μ_r = τ_xy / (μ₀ γ̇) (19)
MPCD implementation
The MPCD method and its implementation in LAMMPS have been detailed elsewhere for a pure fluid [172] as well as for forced flow in the presence of walls [24]. A summary is given here for purposes of comparison, with additional details relevant to colloidal suspensions given in Sect. 3.5. In MPCD, the solvent is represented as point particles with no pairwise particle/particle or long-range interactions. In order to propagate momentum through the fluid, MPCD particles are first streamed at a constant velocity, and their positions x_i are updated accordingly:

x_i(t + Δt) = x_i(t) + v_i(t) Δt (20)

At prescribed time intervals, particles are grouped into evenly-spaced cubic regions (bins), and various schemes are used to swap momentum among particles in order to simulate solvent collisions. In the original MPCD (or SRD) scheme proposed by Malevanets and Kapral [144], the components of the velocities of particles relative to the centre of mass velocity of all particles in a given bin ξ are rotated around a randomly selected orthogonal direction:

v_i' = u_ξ + R_s·(v_i − u_ξ) (21)

Here, R_s is a stochastic rotation matrix and u_ξ is the center of mass velocity of all N_ξ particles located in bin ξ:

u_ξ = (1/N_ξ) Σ_{j∈ξ} v_j (22)

In our implementation, we select randomly one of six directions corresponding to positive and negative orthogonal axes, and always rotate by 90°. Alternative rotation schemes have been proposed, and the rotation angle can be modulated to control the properties of the solvent [3,219]. It can be easily verified that this collision scheme conserves linear momentum and energy; as a result, the correct hydrodynamics are reproduced [101,144]. We refer to simulations based on this collision scheme as SRD.
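A compact sketch of one SRD collision step with the 90° rotations about randomly chosen coordinate axes described above; the bin indexing assumes a cubic periodic box, and the random grid shift required at short mean free paths is omitted for brevity.

```python
import numpy as np

def srd_collide(x, v, box, dx, rng):
    """One SRD multi-particle collision: rotate velocities relative to the
    bin center-of-mass velocity by 90 degrees about a random +/- axis.
    x, v: (N, 3) positions and velocities; box: cubic box length; dx: bin size."""
    nbins = int(round(box / dx))
    ix = np.floor(x / dx).astype(int) % nbins      # periodic bin indices
    bin_id = (ix[:, 0] * nbins + ix[:, 1]) * nbins + ix[:, 2]

    for b in np.unique(bin_id):
        members = np.where(bin_id == b)[0]
        u = v[members].mean(axis=0)            # bin COM velocity (equal masses)
        dv = v[members] - u
        axis = rng.integers(3)                  # rotation axis: x, y or z
        sign = rng.choice([-1.0, 1.0])          # rotation sense
        i, j = [(1, 2), (2, 0), (0, 1)][axis]   # plane perpendicular to the axis
        # 90-degree rotation in the (i, j) plane preserves |dv| (energy)
        dv[:, i], dv[:, j] = -sign * dv[:, j].copy(), sign * dv[:, i].copy()
        v[members] = u + dv
    return v
```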
An alternative collisional scheme known as multi-particle collision/Andersen thermostat (MPC-AT) [164,165] that does not entail rotation has also gained traction, due to its inherent ability to thermostat the fluid. In this scheme, particle velocities are updated according to:

v_i' = u_ξ + v_i,rand − (1/N_ξ) Σ_{j∈ξ} v_j,rand (23)

Here, the v_i,rand represent velocities drawn randomly out of a Maxwell-Boltzmann distribution, and u_ξ is the mean velocity for a particular bin, defined as in Eq. (22). A similar scheme can be used to conserve angular momentum (MPC-AT+a) [92,164], but we do not include it here as we are interested primarily in translational aspects of colloidal motion. In both SRD [92,219] and MPC-AT [164] collision schemes, the viscosity and diffusion coefficients of the pure fluid can be calculated analytically. The resulting expressions for the kinematic viscosities ν of the two schemes (Eqs. (24) and (25); see [92,219] for SRD and [164] for MPC-AT) involve M, the mean number of MPCD particles per bin, Δt, the time between collisions, Δx, the bin size, and ρ, the mass density of the fluid. Values of ν and ρ are chosen to match the desired physical properties of the fluid, while (24) and (25) yield two distinct solutions for the collision time step Δt [24]. The small time step solution corresponds to a short mean free path λ relative to the bin size, which requires random shifting of collision bins to remove inter-particle correlations and restore Galilean invariance [100,101]. In contrast, the larger time step solution leads to a larger mean free path, and bin shifting is not required. However, the short mean free path solution also corresponds to higher, more realistic Schmidt numbers. Here we present results for several sets of parameters for both methods. In all cases we attempt to match the properties of the same Lennard-Jones solvent discussed earlier, following previous work [24]. The resulting parameters for both collision schemes are summarized in Table 2. The Schmidt number in Lennard-Jones fluids is typically of order 10-100 [152], comparable to the values obtained using the small collision time step, but significantly larger than those of the large time steps. More realistic fluids have Schmidt numbers of order 10³, which is difficult to attain with the MPCD method. Several schemes have been proposed for coupling MPCD fluids to colloid particles. The simplest of these includes colloid particles in the multi-particle collision scheme as if they were another MPCD particle (located at the colloid center) [11,109], with the option of weighting the colloid particle influence by the colloid mass. This is referred to as 'collisional coupling' by Batôt et al. [11]. While this scheme yields qualitatively reasonable results, it does not resolve the particle surface in any way, and cannot be expected to compare quantitatively to the other methods tested here; the hydrodynamic radius of the colloid particles in this approach is a function of the MPCD parameters, rather than being set to the desired value [109]. A more sophisticated approach relies on introducing a pairwise potential between MPCD particles and colloid particles, typically with a short repulsive character [11]. However, this introduces additional parameters that must be carefully selected for a given set of MPCD fluid parameters [11], and it is not clear that no-slip boundaries can easily be enforced in this manner. Finally, the approach that we adopt here involves detecting collisions between MPCD particles and colloid particles that take place during the MPCD streaming step (Eq. (20)), re-directing the MPCD particles, and adjusting the forces/torques on the colloids appropriately.
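For comparison with the SRD sketch above, the MPC-AT rule of Eq. (23) can be sketched for a single bin as follows (equal particle masses assumed; bin bookkeeping as before). The shift by the mean of the freshly drawn velocities is what restores momentum conservation while thermostatting the bin.

```python
import numpy as np

def mpc_at_collide(v_bin, kT, m, rng):
    """MPC-AT collision for the velocities of the particles in one bin:
    v_i' = u + v_rand_i - <v_rand>, which conserves linear momentum and
    thermostats the fluid at temperature kT (equal masses m assumed)."""
    u = v_bin.mean(axis=0)                                  # bin COM velocity
    v_rand = rng.normal(scale=np.sqrt(kT / m), size=v_bin.shape)
    return u + v_rand - v_rand.mean(axis=0)
```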
20), re-directing the MPCD particles, and adjusting the forces/torques on the colloids appropriately. These collisions are not to be confused with the bin-wise multi-particle collisions, in which only MPCD particles participate (Eqs. 21 and 23). The physical basis of MPCD particle-colloid collisions is not easily interpreted, since MPCD particles do not correspond directly to solvent molecules or fluid elements. One of the key requirements of such collisions is that they enforce no-slip boundary conditions at the colloid particle surface, i.e. the average tangential component of the MPCD fluid velocity must vanish at the colloid surface. We have tested two such collision schemes.
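As referenced above, the bin-wise MPC-AT update of Eq. (23) reduces to a few lines; the following sketch assumes particles have already been assigned to bins (as in the SRD sketch), with illustrative temperature and mass values:

```python
import numpy as np

rng = np.random.default_rng(1)
kT, m = 1.0, 1.0   # illustrative temperature and MPCD particle mass

def mpc_at_collide(v, flat_bins):
    """MPC-AT collision, Eq. (23): each particle receives the bin
    center-of-mass velocity (Eq. 22) plus a Maxwell-Boltzmann sample,
    recentered within the bin so that linear momentum is conserved."""
    for b in np.unique(flat_bins):
        idx = np.where(flat_bins == b)[0]
        u = v[idx].mean(axis=0)
        v_rand = rng.normal(0.0, np.sqrt(kT / m), (len(idx), 3))
        v[idx] = u + v_rand - v_rand.mean(axis=0)
    return v

v = rng.normal(0.0, 1.0, (1000, 3))
flat_bins = rng.integers(0, 64, 1000)
p0 = v.sum(axis=0).copy()
v = mpc_at_collide(v, flat_bins)
print("momentum drift:", np.abs(v.sum(axis=0) - p0).max())  # ~ 0
```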
The first collision scheme, which we refer to as stochastic boundary conditions, is based on the work of Inoue et al. [103]. Briefly, an MPCD particle that collides with a colloidal particle is returned to the point of collision, after which it is assigned a random velocity with a component along the outward normal to the colloid particle surface, and two components tangential to this. The normal and tangential components (v_n and v_t, respectively) are drawn out of the following distributions:

P(v_n) = 2β v_n exp(−β v_n²)    (26)

P(v_t) = (β/π)^{1/2} exp(−β v_t²)    (27)

Here, β = m_f/(2k_B T), with m_f the mass of the MPCD fluid particles (taken here to be unity). The colloid particle surface velocity at the point of collision (Eq. 29) is then added to the randomly assigned MPCD particle velocity, and the MPCD particle is propagated with the new velocity for the remainder of the streaming time step. Forces are applied to the colloid at the collision point based on the momentum change of the MPCD particle. In previous work [24], we showed that this boundary condition can be problematic for situations with forced flow, but we expect it to be adequate for quiescent conditions.

In the second collision scheme ('reverse'), the MPCD particle is returned to the point of collision, and the magnitude of each of its outgoing velocity components is calculated assuming a momentum-conserving (perfectly elastic) collision with the point on the colloid surface. In the limit of the colloid mass being much larger than the MPCD particle mass, this yields:

v_f1 = 2 v_cs^0 − v_f0    (28)

Here, v_f0 and v_f1 are the velocities of the MPCD particle before and after the collision, respectively, and v_cs^0 is the velocity at the colloid surface before the collision:

v_cs^0 = v_c^0 + ω_c^0 × (r_s − r_c)    (29)

v_c^0 and ω_c^0 are the translational and angular velocities of the colloid particle prior to collision, and r_s and r_c are the locations of the surface collision point and the colloid particle center, respectively.
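Both surface-collision rules are simple to express once the collision point, outward normal, and tangent vectors are known; the sketch below assumes these have already been computed. The stochastic rule samples Eqs. (26)-(27) (an inverse-CDF Rayleigh draw for v_n, Gaussian draws for v_t), and the reverse rule implements Eqs. (28)-(29):

```python
import numpy as np

rng = np.random.default_rng(2)
kT, m_f = 1.0, 1.0
beta = m_f / (2.0 * kT)

def stochastic_reflect(n_hat, t1_hat, t2_hat, v_surface):
    """Stochastic boundary condition: outward normal speed from the
    Rayleigh distribution of Eq. (26) (sampled by inverse CDF) and
    Gaussian tangential components per Eq. (27), plus the local colloid
    surface velocity."""
    v_n = np.sqrt(-np.log(rng.uniform()) / beta)
    v_t1 = rng.normal(0.0, np.sqrt(1.0 / (2.0 * beta)))
    v_t2 = rng.normal(0.0, np.sqrt(1.0 / (2.0 * beta)))
    return v_n * n_hat + v_t1 * t1_hat + v_t2 * t2_hat + v_surface

def reverse_reflect(v_f0, v_c, omega_c, r_s, r_c):
    """Reverse (bounce-back) boundary condition, Eqs. (28)-(29):
    v_f1 = 2 v_cs^0 - v_f0, with v_cs^0 the colloid surface velocity at
    the contact point, in the heavy-colloid limit."""
    v_cs = v_c + np.cross(omega_c, r_s - r_c)
    return 2.0 * v_cs - v_f0

# Example: a particle hitting the top of a stationary colloid
n_hat = np.array([0.0, 0.0, 1.0])
print(stochastic_reflect(n_hat, np.array([1.0, 0, 0]), np.array([0, 1.0, 0]),
                         v_surface=np.zeros(3)))
print(reverse_reflect(np.array([0.1, 0.0, -1.0]), np.zeros(3), np.zeros(3),
                      np.array([0.0, 0.0, 5.0]), np.zeros(3)))
```

In either case, the momentum change of the MPCD particle would then be applied as an equal and opposite force (and torque, about the colloid center) on the colloid.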
In order to avoid artificially low local viscosities in MPCD bins that are partially occupied by colloid particles [131,232], we use the "virtual particle" method first suggested by Lamura et al. [131]. Although several versions have been proposed [24,232], we only implement one here, which has been shown to be adequate for forced flow between parallel walls [24]. In this scheme, "virtual" MPCD particles (VPs) are added at random locations to the interior of colloid particles where MPCD bins partially overlap colloids. Next, the velocities of the VPs are randomly assigned out of a Maxwell-Boltzmann distribution, and augmented by the velocity of the interior point in the colloid corresponding to their location. VPs then participate in multi-particle collisions with actual MPCD particles, and resulting changes in their momenta are transferred as forces/torques to the colloids. Note that VPs only participate in the multi-particle collisions, and new VPs are generated for each such event. The total number of VPs assigned to each colloid particle is selected to yield the same overall density of VPs as actual MPCD particles in the bulk fluid. Additional details are provided in our previous work [24], where this method was referred to as "VP_dens" and "VP_multi".
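A hedged sketch of the virtual particle fill follows: interior points are drawn by rejection sampling, given Maxwell-Boltzmann velocities augmented by the local rigid-body velocity, and would then be handed to the multi-particle collision step. The rejection approach and function names are our own illustrative choices; see [24] for the scheme actually used:

```python
import numpy as np

rng = np.random.default_rng(3)
kT, m_f = 1.0, 1.0

def make_virtual_particles(r_c, a, v_c, omega_c, n_density):
    """Generate virtual particles uniformly inside a colloid of radius a
    centered at r_c (rejection sampling). Each VP velocity is a
    Maxwell-Boltzmann sample plus the rigid-body velocity at its location."""
    n_vp = int(round(n_density * 4.0 / 3.0 * np.pi * a**3))
    pts = []
    while len(pts) < n_vp:
        p = rng.uniform(-a, a, 3)
        if p @ p < a * a:
            pts.append(r_c + p)
    pts = np.array(pts)
    v_th = rng.normal(0.0, np.sqrt(kT / m_f), (n_vp, 3))
    v_rb = v_c + np.cross(omega_c, pts - r_c)
    return pts, v_th + v_rb

# Bulk MPCD density from the text: M = 5.28 particles per (2.0 sigma)^3 bin
pts, vels = make_virtual_particles(np.zeros(3), a=5.0, v_c=np.zeros(3),
                                   omega_c=np.array([0.0, 0.0, 0.1]),
                                   n_density=5.28 / 2.0**3)
print(len(pts), "virtual particles")
```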
DPD implementation
The DPD method introduced by Hoogerbrugge and Koelman [95] treats a fluid using a collection of particles with 'soft' interactions, allowing for a much larger time step than atomistic MD. The characteristic dimension of DPD particles (i.e. the cutoff of the DPD interparticle interactions) is much larger than the molecular dimension of fluid molecules, but should be smaller than the relevant flow characteristic size (in the case of suspensions, the colloid particle diameter). Additionally, thermostatting in a DPD fluid is carried out using a pairwise scheme that preserves hydrodynamics. The physical basis of the DPD method can be justified if DPD particles are thought of as 'packets' of fluid, but as with MPCD, rigorous mapping to the properties of a given fluid is problematic, as is resolving geometric features comparable to the size of DPD particles [175].
From a molecular dynamics perspective, DPD is a traditional pairwise interaction model with a short-range cutoff and can be implemented similarly to a Lennard-Jones potential. The implementation in LAMMPS follows that of Groot and Warren [87], where the force on particle i due to particle j separated by a distance r < r_c is given as:

F_ij = A w(r) ê_ij − γ w²(r) (ê_ij · v_ij) ê_ij + σ_R w(r) α Δt^{−1/2} ê_ij,  with σ_R² = 2γ k_B T    (30)

Here, ê_ij is the unit vector pointing from particle j to particle i, v_ij = v_i − v_j is the relative velocity of the two particles, α is a Gaussian random number with zero mean and unit variance, Δt is the timestep size, and w(r) is a weighting factor that varies between 0 and 1.
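A minimal sketch of this pairwise force (Eq. 30) for a single pair follows; in practice the Gaussian draw and force accumulation would occur inside a neighbor-list loop, and the example call below simply reuses the fluid parameters quoted in the text:

```python
import numpy as np

rng = np.random.default_rng(4)

def dpd_pair_force(r_i, r_j, v_i, v_j, A, gamma, kT, r_cut, dt):
    """Groot-Warren DPD force on particle i from particle j, Eq. (30):
    conservative + dissipative + random terms, zero beyond r_cut."""
    r_vec = r_i - r_j
    r = np.linalg.norm(r_vec)
    if r >= r_cut:
        return np.zeros(3)
    e_ij = r_vec / r
    w = 1.0 - r / r_cut                    # weighting factor, 0 < w <= 1
    v_ij = v_i - v_j
    sigma_R = np.sqrt(2.0 * gamma * kT)    # fluctuation-dissipation relation
    alpha = rng.normal()                   # zero mean, unit variance
    f_c = A * w * e_ij
    f_d = -gamma * w * w * (e_ij @ v_ij) * e_ij
    f_r = sigma_R * w * alpha * e_ij / np.sqrt(dt)
    return f_c + f_d + f_r

f = dpd_pair_force(np.array([2.0, 0.0, 0.0]), np.zeros(3),
                   np.zeros(3), np.array([1.0, 0.0, 0.0]),
                   A=25.0, gamma=86.0, kT=1.0, r_cut=3.0, dt=0.03)
print(f)
```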
As with all explicit solvent models, mapping the physical properties of a real fluid to a DPD model is challenging. In particular, the lack of analytical expressions for various dynamic properties such as viscosity complicates matters (i.e. there are no equations analogous to (24) and (25) for the MPCD method). Approximate expressions have been proposed [87], which provide a useful starting point for the mapping procedure we carry out herein. As previously discussed, we aim to reproduce the viscosity of the carrier fluid, rather than simply the key dimensionless parameters for a given flow situation. The combination of DPD parameters that gives the desired viscosity of 1.01 m_LJ/στ is not unique. We therefore select a value of A = 25 ε/σ, which, following previous work [170,181,197], results in a realistic solvent compressibility; the DPD interaction cutoff r_c is chosen to be somewhat smaller than the colloid particle diameter [181], r_c = 3.0σ; and the DPD particle number density is set to 3.0 r_c^−3, or 1/9 σ^−3, based on computational considerations. The DPD particle mass is set to 5.94 m_LJ to match the target mass density of 0.66 m_LJ/σ³. Finally, the parameter γ is set to 86 ετ/σ², which yields a viscosity of ∼1.01 m_LJ/στ, as measured using a trial-and-error approach and the Müller-Plathe viscosity calculation method (see below). The nonlinear dependence of the viscosity on the damping parameter γ is well known for the DPD method [114,218]. For this set of parameters, the system pressure is ∼3 ε/σ³, and the Schmidt number is ∼64.
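The internal consistency of these choices is easy to verify; the short check below reproduces the quoted number density and DPD particle mass (γ cannot be checked this way, as it was tuned against the measured viscosity):

```python
# Consistency check of the DPD fluid parameters quoted above.
r_cut = 3.0                  # DPD cutoff, in units of sigma
n_density = 3.0 / r_cut**3   # 3 particles per r_cut^3 -> 1/9 sigma^-3
rho_target = 0.66            # target mass density, m_LJ / sigma^3

m_dpd = rho_target / n_density
print(f"number density = {n_density:.4f} sigma^-3")  # 0.1111
print(f"DPD particle mass = {m_dpd:.2f} m_LJ")       # 5.94
```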
Several approaches have been proposed for coupling a DPD fluid to colloidal particles. In early works based on the DPD method, suspended objects were simulated by constraining clusters of DPD particles to move as rigid bodies [21-23,118]. While this approach is flexible with regard to representing various object shapes (see also Sect. 5.2), it leads to significant solvent penetration into colloid particles due to the soft DPD interaction, and in the case of spherical colloids, it entails unnecessary computational expense. Instead, we model colloids as single spherical particles, and introduce an additional, relatively stiff conservative interaction between colloids and DPD particles. Similar approaches have been proposed by others and shown to be more conducive to producing correct hydrodynamic interactions [36,60,170]. Given the non-physical nature of DPD particles, the choice of interaction potential is somewhat arbitrary, and serves only to prevent solvent penetration into the colloid and satisfy the required boundary conditions. As such, we use a simple shifted Lennard-Jones potential:

U_CS(r) = 4 ε_CS [ (σ_CS/(r − δ_CS))^12 − (σ_CS/(r − δ_CS))^6 ]    (31)

Here, ε_CS and σ_CS are Lennard-Jones parameters specific to the colloid-DPD interaction, r is the distance between the colloid and DPD particle centers, and δ_CS is a parameter that effectively shifts the potential so that it diverges at a non-zero particle separation. As these parameters are not straightforward to select, we have tested several combinations to achieve the desired diffusion characteristics. These parameters were selected to approximately match the short-time diffusivity of colloidal suspensions at a volume fraction of 0.1 based on iFLD simulations; short DPD simulations were therefore carried out with several parameter sets to ensure that reasonable agreement was attained. We have tested three combinations of parameters for DPD-colloid coupling, which we denote as DPD-v1, DPD-v2 and DPD-v3. The values of the relevant parameters are summarized in Table 3.
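The potential of Eq. (31) is evaluated as below; the divergence at r = δ_CS is what keeps solvent out of the colloid interior. The numerical values in the example are placeholders, not any of the DPD-v1/v2/v3 parameter sets of Table 3:

```python
def colloid_solvent_potential(r, eps_cs, sigma_cs, delta_cs):
    """Shifted Lennard-Jones potential of Eq. (31): a standard LJ form in
    the shifted separation (r - delta_cs), diverging at r = delta_cs."""
    if r <= delta_cs:
        raise ValueError("inside the excluded core: potential diverges")
    s = sigma_cs / (r - delta_cs)
    return 4.0 * eps_cs * (s**12 - s**6)

# Placeholder parameters: probe just outside a colloid of radius ~5 sigma
print(colloid_solvent_potential(r=5.6, eps_cs=1.0, sigma_cs=0.5, delta_cs=5.0))
```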
Additional modifications of the DPD method have been proposed in the literature that have resulted in improved accuracy. The centro-symmetric colloid-solvent interaction potential in Eq. (31) can only transfer linear momentum to colloid particles, and all rotational aspects of the flow are lost. To address this, Español [67] has proposed a generalization of the DPD method known as the fluid particle model (FPM), which adds non-central shear components to the dissipative forces. Several variations of this method have been successfully applied to simulations of colloidal suspensions [170,175,181]. In order to achieve higher Schmidt numbers, Fan et al. [69] have suggested a modified form of the DPD weighting function, w(r) = (1 − r/r_c)^s, where non-unity values of the exponent s have been shown to yield higher Schmidt numbers. For high-viscosity fluids, the use of a Lowe thermostat [139] to replace the DPD random force has been advocated by several workers [35,36]. For additional discussion, the interested reader is referred to the excellent review by Pivkin et al. [175].
Colloid particles and simulation details
In all of the methods discussed here, the colloidal particles are treated as finite-sized particles with translational as well as orientational degrees of freedom (LAMMPS atom style 'sphere'). Parameters for the hard-sphere-like colloids are similar to those in previous work [197], with the particle radius set to a = 5σ. Inter-colloid interactions are captured using the integrated Lennard-Jones potential (Eq. (16); pair style 'colloid' in LAMMPS) with a cutoff distance of R_c = 2a + 30^−1/6 σ and a Hamaker constant of A_cc = 4π²ε ≈ 39.478ε. Note that this interaction is not infinitely hard, but is slightly softened to prevent colloid overlaps and allow for standard molecular dynamics time integration. Physically, this corresponds to adding a thin surfactant coating to the colloid particles to avoid flocculation. Due to its short range and relatively stiff nature, the differences from a true hard-sphere interaction are expected to be minor. Although lubrication forces prevent the close approach of the colloids, we also include the inter-particle potential with FLD, in order to maintain consistency with the other methods. In all cases, the inner and outer cutoff distances for FLD lubrication terms are set to 2.0002a and 3a, respectively (i.e. FLD lubrication terms are active for 2.0002a < r_ij < 3a).
Based on simple energy conservation considerations, we found that the hard-sphere potential described above limits the time step to a value of Δt_C ∼ 0.04τ. This is smaller than the inertial timescale of colloid motion (τ_C = m/(6πμa) ∼ 3.63τ), implying that nFLD cannot take full advantage of its large time-stepping ability; indeed, results from this method for timescales not satisfying t ≫ τ_C are physically suspect. In order to facilitate the comparison among the various methods, we nonetheless retain the colloid potential and a small time step of δt_C = 0.01τ_C ∼ 0.0363τ for iFLD and δt_C = 0.005τ_C ∼ 0.0182τ for nFLD simulations, where colloid particle overlaps were occasionally problematic. We will return to this discussion when comparing the computational performance of the different methods, and note that for other conservative inter-particle potentials, nFLD can be much more efficient. Similar values of the MD time step (∼0.03τ) were used for MPCD, where the multi-particle collision time step (see Table 2) must be an exact multiple of the colloid/MD time step. For DPD, where no such constraints exist, we used a time step of 0.03τ for all cases.
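The quoted inertial timescale follows directly from Stokes drag, as the short check below shows; the colloid mass is computed under our assumption of neutral buoyancy in the 0.66 m_LJ/σ³ solvent:

```python
import numpy as np

a = 5.0      # colloid radius, sigma
mu = 1.01    # solvent viscosity, m_LJ / (sigma tau)
rho = 0.66   # solvent (and, by assumption, colloid) density, m_LJ / sigma^3

m_colloid = 4.0 / 3.0 * np.pi * a**3 * rho    # neutrally buoyant colloid
tau_c = m_colloid / (6.0 * np.pi * mu * a)    # inertial timescale m/(6 pi mu a)
print(f"tau_C = {tau_c:.2f} tau")                         # ~3.63 tau
print(f"iFLD step 0.01 tau_C  = {0.01 * tau_c:.4f} tau")  # ~0.0363 tau
print(f"nFLD step 0.005 tau_C = {0.005 * tau_c:.4f} tau") # ~0.0182 tau
```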
The starting configurations for our simulations were generated so as to maintain consistency between the different methods. The MPCD method is the only one that imposes any constraint on the simulation domain size, which must be a multiple of the MPCD bin size (2.0σ). As such, we set the simulation box size for all cases based on this criterion. Computational considerations for the explicit solvent methods (MPCD and DPD) at low volume fractions limit the system size to several hundred colloid particles, so we use a system size of 200 particles of radius a = 5σ for all methods and volume fractions. Based on these two constraints, we used cubic domains of side length 104, 80, 70 and 64σ, corresponding to volume fractions φ of 0.0931, 0.2045, 0.3053, and 0.3995, respectively; for simplicity, we refer to these as the 0.1, 0.2, 0.3, and 0.4 volume fraction systems. The diffusion coefficient measured in simulations with periodic boundary conditions is known to have a strong dependence on system size [8,75,129]. We therefore use the corrections suggested by Ladd [129] for all volume fractions. Further details are provided in Sect. 4.
The lowest volume fraction system (0.1) was constructed by random placement of colloidal particles in a cubic domain of size 104σ, ensuring that no particles overlap. This method is robust up to volume fractions of approximately 0.3, beyond which random placement of particles quickly exhausts all possibilities. As such, the 0.1 volume fraction system was slowly compressed in a simple Langevin dynamics (LD) simulation to the desired dimensions to generate the starting colloid particle configurations for the remaining volume fractions. In the case of MPCD simulations, additional solvent particles were added to achieve the desired number of MPCD particles per bin (M = 5.28), while ensuring that no MPCD particle started inside a colloid particle. For DPD simulations, simulation boxes of the same size as the desired colloid suspension systems were first constructed using only DPD particles with the requisite parameters (see Sect. 3.3), and the resulting pressures were measured. The colloidal particles were then added, and any DPD particles within a distance a of any colloidal particle center were removed. For each volume fraction, the pressure of the colloid-DPD mixture resulting from this procedure was found to be in good agreement with the pressure of the pure DPD fluid; as such, the viscosity and other key parameters of the DPD fluid are consistent among volume fractions.
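Both the box sizes and the initial placement are straightforward to reproduce. The sketch below recovers the quoted volume fractions from the four box edges and implements overlap-free random placement with the minimum-image convention; function names are ours:

```python
import numpy as np

rng = np.random.default_rng(5)
N, a = 200, 5.0
v_colloid = N * 4.0 / 3.0 * np.pi * a**3

# Reproduce the quoted volume fractions for the four box sizes
for L in (104.0, 80.0, 70.0, 64.0):
    print(f"L = {L:.0f} sigma -> phi = {v_colloid / L**3:.4f}")
# -> 0.0931, 0.2045, 0.3053, 0.3995

def random_placement(N, a, L, max_tries=10**6):
    """Overlap-free random placement in a periodic cube; practical only
    up to phi ~ 0.3, as noted in the text."""
    pos = []
    for _ in range(max_tries):
        p = rng.uniform(0.0, L, 3)
        if pos:
            d = np.array(pos) - p
            d -= L * np.round(d / L)       # minimum-image convention
            if (d**2).sum(axis=1).min() <= (2.0 * a) ** 2:
                continue                   # overlap: reject and retry
        pos.append(p)
        if len(pos) == N:
            return np.array(pos)
    raise RuntimeError("placement failed; compress a dilute system instead")

colloids = random_placement(N, a, L=104.0)
```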
In all simulations, several hundred thousand equilibration steps were carried out prior to sampling for the purposes of computing diffusion coefficients or radial distribution functions. Viscosity for all FLD methods was determined using a non-equilibrium molecular dynamics (NEMD) approach, in which the simulation box was shear-deformed at a constant rate (equivalent to Lees-Edwards boundary conditions; implemented in LAMMPS using the "fix deform" capability). The resulting stress was then measured as outlined near the end of Sect. 3.1, and viscosity values were computed for several shear rates. The Müller-Plathe velocity swapping algorithm [160] was used to compute viscosities for all explicit solvent methods. In this method, a momentum flux is imposed by random exchanges of particle velocities, and the resulting velocity profile (i.e. shear rate) is measured. The shear rate can be varied by controlling the frequency and number of particles undergoing velocity exchanges. For additional details, the interested reader is referred to the original work of Müller-Plathe [160] as well as the work of Petersen et al. [172]. The method is implemented in LAMMPS in the "fix viscosity" capability. For both MPCD and DPD, only solvent particles are included in the velocity exchanges; this minimizes interference with the colloid particle dynamics, and leads to much faster convergence of the measured viscosities as compared to the NEMD method that must be used for FLD simulations. In both DPD and MPCD, simulations of the pure fluid were first carried out at the desired shear rates to ensure that no shear thinning of the solvent takes place in the shear range of interest.
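A generic sketch of the exchange step in the original Müller-Plathe method [160] follows: the box is divided into slabs along z, and the x-velocity of the most negative-moving particle in the bottom slab is swapped with that of the most positive-moving particle in the middle slab, imposing a known momentum flux. This is an illustration of the algorithm only, not the internals of the LAMMPS "fix viscosity" implementation:

```python
import numpy as np

rng = np.random.default_rng(6)
N, L, n_slabs = 4000, 20.0, 20
x = rng.uniform(0.0, L, (N, 3))
v = rng.normal(0.0, 1.0, (N, 3))
transferred = 0.0   # accumulated exchanged x-momentum (unit particle mass)

def mp_swap(x, v):
    """One Muller-Plathe exchange: swap vx between the most negative-vx
    particle in the bottom slab and the most positive-vx particle in the
    middle slab, transporting x-momentum across the box."""
    global transferred
    slab = (x[:, 2] / (L / n_slabs)).astype(int) % n_slabs
    bot = np.where(slab == 0)[0]
    mid = np.where(slab == n_slabs // 2)[0]
    i = bot[np.argmin(v[bot, 0])]
    j = mid[np.argmax(v[mid, 0])]
    transferred += v[j, 0] - v[i, 0]
    v[i, 0], v[j, 0] = v[j, 0], v[i, 0]

for _ in range(100):    # interleave with dynamics in a real simulation
    mp_swap(x, v)

# Viscosity then follows from mu = -j_p / (d v_x / d z), with the imposed
# flux j_p = transferred / (2 * t_elapsed * L * L) and the slope of the
# measured steady-state velocity profile.
print("exchanged x-momentum:", transferred)
```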
Computational considerations
From a computational perspective, all the techniques described in this section (FLD, MPCD, DPD) share similar computational kernels: computing colloid/colloid interactions and (optionally) colloid/solvent or solvent/solvent interactions. This enables all the methods to be implemented in a discrete element (DEM) or molecular dynamics (MD) framework such as LAMMPS. Moreover, all the methods can exploit parallelism using the spatial-decomposition approach provided by LAMMPS (and many other MD codes), whereby the simulation box is partitioned across processors, and processors compute forces on particles within their sub-domain and communicate particle information to processors owning neighboring sub-domains [176]. Due to the short-range nature of the computations, so long as there are sufficient particles per processor, the computational work outweighs the communication costs, and the parallel performance of a simulation can scale to large numbers of processors. Prototypical performance and scaling data are presented in Sect. 4.4.
Since FLD is an implicit solvent method, colloid/colloid interactions are the only pairwise particle computations to perform. As discussed above, the inertial FLD method is effectively an explicit time-stepping method, so the forces and torques on particles resulting from pairwise interactions are time-integrated in the usual manner. The non-inertial version of FLD is effectively an implicit time-stepping method, which requires the solution of a linear matrix equation to obtain particle velocities at each time step. This is a sparse matrix of order N, the number of particles, with non-zero elements for each particle/particle interaction within the cutoff distance. The matrix equation can thus be solved in a modest number of conjugate gradient iterations, which invoke the same kind of local inter-processor communication needed to exchange particle information between neighboring processors in the three-dimensional spatial partitioning.
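The structure of this solve is illustrated below with a generic conjugate gradient example; the matrix here is a synthetic symmetric positive definite stand-in, not the actual nFLD resistance tensor, and LAMMPS uses its own distributed CG rather than SciPy:

```python
import numpy as np
from scipy.sparse import identity, random as sparse_random
from scipy.sparse.linalg import cg

rng = np.random.default_rng(7)
n = 3 * 200                     # three velocity components per colloid

# Synthetic SPD stand-in for the sparse resistance matrix: symmetric,
# diagonally dominant, with off-diagonal entries for 'nearby' pairs only.
A = sparse_random(n, n, density=0.02, random_state=7)
R = 0.5 * (A + A.T) + 10.0 * identity(n)

F = rng.normal(size=n)          # net conservative + Brownian forces
v, info = cg(R, F)              # particle velocities from the implicit solve
assert info == 0
print("residual:", np.linalg.norm(R @ v - F))
```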
MPCD for hybrid colloidal/solvent systems has been implemented in the parallel MP2C code by Sutmann et al. [211], and has been used to model hydrodynamic effects on shearing polymers [97] and colloidal systems [205]. In LAMMPS, MPCD is implemented as a 'fix srd' style, which allows MPCD solvent particles to be added to a colloidal system. Colloid/solvent interactions are only computed when a streaming solvent particle collides with a colloid particle. As discussed above, the no-slip collision imparts force and torque to the colloid. Collisions are detected efficiently in LAMMPS by binning the colloid particles onto a regular 3-D grid, which may, but need not, be of the same resolution as the grid used for binning MPCD particles to perform multi-particle collisions. Since colloid particles have finite extent, each may overlap several such bins. After an MPCD particle streams to its final position, a loop over the small number of colloid particles in its bin is used to check for collisions.
MD codes such as LAMMPS use neighbor lists of nearby particles to efficiently enumerate particle/particle interactions [176], e.g. for colloid/colloid interactions in all the methods discussed here. MPCD particles do not interact directly with each other, and the number of solvent particles is orders of magnitude larger than the number of colloid particles in typical mixture models. For performance reasons it is important to be able to "skip" the solvent particles when building neighbor lists for colloid/colloid interactions or other operations such as time integration, which only involve colloid particles. LAMMPS does this by separating the two kinds of particles within the list of particles owned by each processor. Colloid particles are at the beginning of the list and can thus be looped over, when necessary, as if no solvent particles were present. Likewise, no MPCD particles need be communicated to neighboring processors to act as "ghost" particles when computing pairwise interactions.
As previously mentioned, DPD is straightforward to implement as a short-range pairwise interaction potential in an MD code. However, an additional complication arises when using small interacting particles as a background solvent for large colloidal particles. Such a model has multiple length scales, as evidenced by the disparate cutoffs for colloid/colloid, colloid/solvent, and solvent/solvent interactions. For example, if colloid particles are 20 times the size of solvent particles (diameter = σ), then coarse-grained colloid/colloid potentials like that of Everaers and Ejtehadi [68] may be cut off at a distance ∼2.5× the colloid diameter = 50σ. Solvent/solvent interactions (like Lennard-Jones or DPD) are typically cut off at ∼2.5σ, and colloid/solvent interactions at some intermediate distance, ∼25σ. Standard parallel algorithms for building neighbor lists and communicating ghost atoms, designed for systems with a single cutoff length, can become very inefficient for such systems. To overcome this, LAMMPS has optional 'multi'-style neighbor-finding and communication algorithms tailored for systems with multiple cutoff lengths. Benchmarks of prototypical solvated colloidal systems with size disparities up to 20× showed speed-ups of up to 100× in serial or parallel, using the algorithms presented by in't Veld et al. [222]. Although the disparities in particle sizes and cutoff distances are not as significant for the systems in the present work, these algorithms are nonetheless beneficial.
A final computational issue that can affect all of the methods, when running in parallel, is that of load imbalance. Colloidal particles in dilute systems (low volume fraction) may aggregate, whether the solvent is implicit or explicit. This can lead to unequal numbers of particles per processor, if the simulation box is partitioned into equal-sized sub-domains. The problem can be exacerbated if explicit solvent is used, since both the large coarse-grained colloids and the tiny solvent particles are represented as single particles. Thus the number of total particles per processor may vary widely. This typically becomes less of an issue as the colloid volume fraction rises, since the colloid density remains more spatially homogeneous. LAMMPS has a 'balance' option to adjust processor sub-domain sizes to improve load balance, which can compensate for one-dimensional density variations, e.g. in an evaporation model with a liquid/vapor interface [38].
The use of GPUs for computational acceleration is becoming commonplace in molecular dynamics modeling. There is recent work by Wang et al. on GPU-enabled DPD models [228], and by Westphal et al. on GPU-enabled MPCD modeling of pure MPCD fluids [231]. We are not aware of GPU implementations of hybrid colloidal/MPCD models.
Diffusion coefficients
The equilibrium diffusion of colloid particles is fundamental to more complex phenomena of interest, such as shear viscosity and microstructure formation. If any simulation technique is to be quantitatively predictive, it must first be able to predict diffusion properties accurately. As such, we present comparisons of the various methods discussed above, with a particular focus on diffusion. The details of each simulation method, the treatment of the colloidal inter-particle interactions and the system construction have been discussed in Sect. 3.
In Fig. 1, we plot diffusion coefficients of hard-sphere colloid suspensions at various volume fractions as a function of time based on FLD simulations, using both the nFLD and iFLD formulations discussed in Sect. 3.1. These plots were obtained by taking central finite difference numerical derivatives of the mean square displacement of colloid particles as a function of time:

D(t) = (1/6) d⟨[r_i(τ_0 + t) − r_i(τ_0)]²⟩/dt    (32)

The angular brackets denote an average over particles as well as time origins τ_0, i.e. using a standard moving time-origin analysis. Note that these values are not corrected for finite size effects, as these corrections are only applicable to plateau values of the diffusion coefficients, rather than the entire range of timescales.
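Equation (32) is easy to evaluate from stored unwrapped trajectories, as in the sketch below; the synthetic random-walk input and frame spacing are illustrative stand-ins for simulation output:

```python
import numpy as np

rng = np.random.default_rng(8)
n_frames, n_part, dt = 2000, 50, 0.1
# Synthetic unwrapped trajectories (frames x particles x 3); a random walk
# stands in for actual simulation output here.
traj = np.cumsum(rng.normal(0.0, 0.1, (n_frames, n_part, 3)), axis=0)

def msd(traj, max_lag):
    """Mean square displacement averaged over particles and time origins."""
    out = np.empty(max_lag)
    for lag in range(1, max_lag + 1):
        disp = traj[lag:] - traj[:-lag]
        out[lag - 1] = (disp**2).sum(axis=2).mean()
    return out

m = msd(traj, max_lag=500)
t = dt * np.arange(1, 501)
D_t = np.gradient(m, t) / 6.0    # Eq. (32), central finite differences
print("plateau estimate:", D_t[-10:].mean())
```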
Of particular interest are the early- and late-time plateau values of the diffusion coefficient, indicated on each plot by a cross or circle symbol, respectively. For the non-inertial FLD method (nFLD), the early time regime is not physically realistic, as this method is predicated on the assumption of having a much larger time step than the colloid inertial time scale τ_C = m/(6πμa); this point was discussed in greater detail in Sect. 3.1. Nevertheless, for sufficiently long times, the nFLD diffusion coefficients converge exactly to the iFLD values. The physical origins of these different diffusion regimes are well known: the early-time plateau corresponds to the diffusion of colloids on length scales characteristic of inter-particle separations, i.e. diffusion within a cage-like structure formed by neighboring colloid particles. The decrease in the diffusion coefficient at later times is associated with diffusing particles encountering neighboring particles and being unable to move past them. At sufficiently long times, particles diffuse past their neighbors, and a Fickian diffusion regime is recovered. Note that the very short-time ballistic regime encountered in atomistic systems is not accessible to either FLD variant, as this requires resolving a time scale shorter than colloid-solvent collisions, and the form of the Brownian force used in both variants of FLD assumes a much larger time scale. At the shortest times shown in Fig. 1, the diffusion coefficient is independent of volume fraction, as this corresponds to diffusion on length scales smaller than those at which colloid particles interact. With increasing volume fraction, inter-colloid collisions happen on shorter length (and time) scales, so that early-time values of diffusion are reached faster and D_early values are lower; similarly, late-time diffusivity values D_late decrease, as higher colloid densities lead to greater hindrance to diffusion.
Qualitatively similar plots to those in Fig. 1 are obtained for all simulation methods. For ease of discussion, we present only D_early and D_late values as a function of volume fraction; while there are additional small differences in the behavior of the diffusion coefficient as a function of time, these plateau values suffice to provide quantitative comparisons of the different methods. Additionally, the values of the early- and late-time diffusion coefficients obtained in this manner are known to be sensitive to the number of colloid particles N in the system, with a dependence that scales as N^−1/3 [8,75,129]. Ladd [129] has proposed an accurate correction for these finite-size effects, where the true infinite-size diffusion coefficient D_∞ is given by:

D_∞ = D(N) + 2.837 D_0 (μ_0/μ) [3φ/(4πN)]^{1/3}    (33)

Here, D(N) is the diffusion coefficient measured based on the plateau values in Fig. 1, D_0 = k_B T/(6πμ_0 a) is the Stokes-Einstein diffusion coefficient, μ_0 and μ are the viscosities of the pure solvent and the suspension, respectively, and φ is the volume fraction. For the viscosity of the suspension μ, we use the shear viscosity measured at Pe = 0.1; see Sect. 4.2.
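Applied to a measured plateau value, the correction is a one-liner; the sketch below encodes our reading of Eq. (33), with placeholder numbers for the measured quantities:

```python
import numpy as np

def ladd_correction(D_N, D0, mu0_over_mu, phi, N, xi=2.837):
    """Finite-size correction in the spirit of Eq. (33):
    D_inf = D(N) + xi * D0 * (mu0/mu) * (3 phi / (4 pi N))^(1/3),
    where xi = 2.837 is the cubic-lattice constant from Ladd [129]."""
    return D_N + xi * D0 * mu0_over_mu * (3.0 * phi / (4.0 * np.pi * N)) ** (1.0 / 3.0)

# Placeholder inputs: a phi = 0.2 system of N = 200 colloids
print(ladd_correction(D_N=0.0080, D0=0.0105, mu0_over_mu=0.55, phi=0.2, N=200))
```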
For simplicity, we do not retain the D_∞ notation, and leave it to be understood that diffusion coefficients for FLD, MPCD and DPD in all subsequent discussions have been corrected in this manner. In Fig. 2, we present early- and late-time diffusivity values for the FLD and LD methods (see Sect. 3.1). The non-inertial versions (nFLD and BD) yield the same results (data not shown). Results from theory [216,223], experiments [150,168] and other simulation work [74] are presented as reference values. We include Brownian dynamics results from the work of Foss and Brady [74], which, in agreement with our own LD simulations, show the need to account for hydrodynamic effects. Note that LD results need not be corrected for finite size effects. While some variation exists among the theoretical, experimental and simulation reference values for both D_early and D_late, it is clear that the iFLD method yields diffusion coefficient values in excellent agreement with the expected values. In particular, the agreement in late-time diffusion coefficients between the iFLD method and the more rigorous Stokesian dynamics method is excellent. We therefore use the iFLD results as a benchmark for the remaining methods.
In Fig. 3, we plot the early- and late-time diffusion coefficients obtained using several variations of the MPCD method, all of which were discussed in Sect. 3.2. We have also tested the variants with the inclusion of "virtual particles" (see the discussion near the end of Sect. 3.2), but found no appreciable effect on diffusion characteristics. Even though the key physical properties of the background MPCD fluid (viscosity, density) are the same in all cases, and the colloid particle boundaries are equally well-defined, there are clearly strong effects from all of the methodological details tested.
While no clear systematic trends emerge from Fig. 3, several general comments can be made. First, with the exception of the SRD/large collision time step/reverse boundary conditions scheme, all other variations attempted here fail to reproduce the iFLD results. However, in most cases the discrepancy is not drastic (∼20-30%). Second, stochastic boundary conditions (Eqs. 26 and 27) always lead to larger colloid diffusion coefficients as compared to reverse boundary conditions (Eq. 28). In all cases, this leads to an overestimate of the late-time diffusivity. We therefore advocate the use of reverse boundary conditions, and limit the discussion of viscosity that follows to simulations based on this treatment. Third, the small collision time step generally leads to larger values of the diffusion coefficient, and a better match to the iFLD results is obtained with the large collision time step. This is somewhat surprising, as the large collision time step leads to unrealistically low Schmidt numbers for both algorithms, but the effect appears to be minor. Fourth, the MPC-AT collision scheme tends to yield diffusion coefficient values that are mostly larger than those resulting from the SRD collision scheme.
Overall, the MPCD method yields qualitatively reasonable results, and in one case (SRD/large collision time step/reverse boundary conditions), a near-perfect match to the iFLD data. However, given the differences observed as a result of the different variants, it may well be that the match in this case is fortuitous. It is not clear that the finite size corrections given in Eq. (33) apply equally to all MPCD variants, particularly at finite Schmidt number, where hydrodynamic correlations may decay much differently. Additional simulations with larger numbers of colloid particles may elucidate this, but these are beyond the scope of the present work.

In Fig. 4, we summarize colloid diffusion results for DPD using three sets of colloid-solvent coupling parameters (see Table 3). We were unable to obtain late-time diffusion coefficient values for a volume fraction of 0.4 (the diffusivity drops to near zero at long times), and the short-time values at this volume fraction are likewise unreliable. This is a result of the finite size of DPD particles, which produces an artificial gel-like state of the system at high volume fractions. At lower volume fractions, the agreement with FLD values is quite good; both early- and late-time values are best reproduced by the parameter set DPD-v3, while DPD-v1 and DPD-v2 overestimate the diffusion coefficients. Presumably, with an alternate choice of parameters or an alternate functional form of the DPD-colloid interaction potential (viz. Eq. 31), even better agreement for both short- and late-time diffusivity could be attained. Additionally, some of the more sophisticated versions of DPD discussed in Sect. 3.3 (the FPM method and the Lowe-Andersen thermostat) can be expected to yield further improvements. However, in all cases, parameters and coupling schemes cannot be chosen directly, and must be calibrated using an ad hoc approach. We therefore identify the lack of a systematic procedure for the selection of DPD-colloid coupling parameters as a major shortcoming of the method.
Shear viscosity
Shear viscosity was computed for all simulation methods, using a NEMD/Lees-Edwards approach for FLD and the Müller-Plathe method [160] for DPD and MPCD. Additional details are provided in Sect. 3.4. In all cases, we express the shear rate γ̇ in terms of the Peclet number, i.e. the ratio of the applied shear forces to diffusion forces (Pe = a²γ̇/D_0 = 6πμ_0 a³γ̇/k_B T). We note that this is only for numerical convenience, since all other quantities (μ_0, a) are equal for all cases. In Fig. 5, we plot the suspension viscosity normalized by the solvent viscosity μ_0 as a function of Pe for various volume fractions, as calculated using iFLD and MPCD. In all cases, nFLD and iFLD yielded the same results to within statistical error, so we only show data for the latter.
At low volume fractions, the shear viscosity computed by MPCD is in excellent quantitative agreement with that produced by iFLD. As the volume fraction increases, the agreement remains strong, but with some notable discrepancies. The small collision time step for both the SRD and MPC-AT collision schemes results in a lower suspension viscosity in all cases as compared to the iFLD values; the agreement is better with the large time step. This is to be expected given that the small collision time step simulations tend to overestimate the diffusion coefficient (indicating a lower effective viscosity). In all cases, the well-known shear thinning effect is observed at higher volume fractions and moderately high shear rates [39,233]. This effect has traditionally been explained as a result of shear-induced layering of particles, although recent work suggests alternative mechanisms [233]. Overall, the MPCD method appears to be an excellent tool for the prediction of rheological properties of colloidal suspensions; however, it is limited to moderate shear rates such as those tested here (Pe < 100), as higher shear rates result in shear thinning of the MPCD fluid (data not shown). While many fluids of interest, including the LJ solvent tested here, do exhibit shear thinning, it occurs at much higher Pe values.

Figure 6 shows a similar comparison of shear viscosity for iFLD and DPD. For clarity, we only show results for DPD-v2 and DPD-v3, since DPD-v1 diffusion results are similar to those of DPD-v2, but consistently overestimate the diffusion coefficient. At low volume fractions, the agreement with iFLD is strong for both DPD-v2 and DPD-v3, as expected given the agreement in the diffusion coefficient values. However, at higher volume fractions, the agreement breaks down significantly, as was the case for the diffusion characteristics. The higher volume fraction cases correspond to a regime in which the typical inter-colloid separation is on the order of the size of DPD particles, which leads to a highly structured gel-like state. For sufficiently high shearing rates, the collective motion of layered colloid and DPD particles yields significant shear thinning. However, suspension viscosities at low Pe values cannot be reliably measured, as the statistical convergence of the velocity profiles requires infeasibly long simulation times. At high volume fractions, we were not able to reach a sufficiently low Pe value to obtain the zero shear rate value of the viscosity. As such, DPD is an adequate method at low volume fractions, but breaks down at volume fractions above ∼0.3. The choice of DPD-colloid coupling method will not yield significant improvements in this area, as this effect is inherently related to the finite size of DPD particles. An obvious solution is to decrease the size of the DPD particles (r_c) relative to colloid particles, but this requires a higher density of DPD particles to maintain the same viscosity, and therefore a significant increase in computational cost.
Equilibrium colloid microstructure
As a final point of comparison, we also quantify the equilibrium microstructure of colloid suspensions using the radial distribution function g(r). In simulation studies of colloid suspensions, it is often of interest to compare pair distribution functions in non-equilibrium conditions (i.e. under shear), particularly in different directions (e.g. the velocity gradient and vorticity directions) [39,126,233]. We do not carry out such an analysis here, as it does not advance our methods comparison appreciably. Equilibrium g(r) plots are shown for the various methods at volume fractions of 0.1 and 0.4 in Fig. 7. The radial distribution function data corresponding to MPCD are only shown for one variation of the method (MPC-AT collision scheme, small collision time step, reverse boundary conditions), as other variations do not lead to appreciable differences in g(r).
Clearly, the FLD and MPCD methods yield the same equilibrium colloid suspension structure at both low and high volume fractions; DPD on the other hand predicts a significantly different structure. With FLD, no solvent particles are present, and FLD pairwise and isotropic terms only modify the dynamics of the suspension, but do not affect structural properties. Similarly, the point-mass particles used to represent solvent in all MPCD methods do not have a significant effect on equilibrium structural properties; at sufficiently high MPCD number densities, depletion-like forces can be expected for particles in close contact, but they do not appear to be significant here. This is likely because the inter-colloid repulsive potential used here prevents such close particle approaches from occurring. With DPD, the finite size of the solvent particles and the conservative forces used to couple DPD and colloid particles clearly have a strong effect on the structural and thermodynamic properties of the suspension. The inter-colloid interaction potential is effectively modified by the presence of DPD particles, as shown by the resulting radial distribution functions. At high volume fractions, where the DPD particles and the range of DPD-colloid interactions approaches the inter-colloid separation, this leads to a highly-structured, gel-like suspension, which explains the breakdown of the DPD method for diffusion and shear viscosity calculations in this range. A related and more detailed discussion of the effects of explicit versus implicit solvents on inter-colloidal interactions is given by Grest et al. [86].
Overall, it is clear that the FLD method yields more accurate diffusion coefficients than any variant of the MPCD and DPD methods that we have tested. A significant challenge for the latter two methods is reproducing the desired physical characteristics of the fluid, in particular dynamic properties such as viscosity. Given the variations in diffusivities observed as a result of changes in the solvent-colloid coupling scheme for both MPCD and DPD, it appears that enforcing the correct colloid particle size and boundary conditions is also problematic. The precise nature of the difficulties in this regard is not clear from the results presented here for either MPCD or DPD, and requires additional investigation and development of these methods. With FLD, both the solvent viscosity and particle size are direct inputs to the simulation, which makes for a trivial parameterization procedure.
Despite discrepancies between explicit solvent methods and FLD with regard to diffusion coefficient values, viscosity data for certain variants of MPCD are in excellent agreement. It appears that for the system tested here, the unusually low Schmidt numbers (order unity) of the large collision time step variants of MPCD do not have a significant adverse effect; however, for more realistic fluids, this is bound to become a concern for all MPCD-based methods. With DPD, good agreement with iFLD viscosity results is obtained at low volume fractions, but the method breaks down at high volume fractions due to the finite size of DPD particles, which is comparable to the inter-colloid separation distance. This can be overcome with a larger number of smaller DPD particles, but the computational advantage of the method is then diminished. Despite the ability to reproduce a higher Schmidt number using DPD, the method overall appears to be a less physically accurate model for colloid suspensions as compared to MPCD.
Computational performance
The computational performance of the different simulation methods is summarized in Table 4. All methods are implemented in the LAMMPS software package [176], as described in Sect. 3. The data in Table 4 are based on runs carried out on the Sandia National Laboratories Red Mesa system, a cluster of Intel Xeon 5500 processors with InfiniBand connectivity; values are wall times in seconds required to perform 10,000 time steps (lower numbers mean better performance), and the numbers in braces below each explicit solvent method in the title row indicate the total number of solvent particles in the system. Short equilibrium simulations were carried out for the system described in the preceding section, i.e. 200 colloidal particles interacting with a purely repulsive integrated Lennard-Jones potential. DPD data correspond to the DPD-v1 system, and MPCD data correspond to the MPC-AT collision scheme with a large time step and reverse boundary conditions. While there are some differences in speed for the different variants of the MPCD method, they are small compared to the differences among the three solvent models.
As expected, the iFLD method provides by far the fastest performance of all the methods tested, while MPCD and DPD show comparable speed and parallel scaling. However, we note that for both volume fractions shown, there are close to ten times more MPCD solvent particles than DPD particles.
We have used a constant number of colloid particles (200) and varied the size of the simulation box to achieve different volume fractions. This results in the explicit solvent methods being significantly faster at higher volume fractions, where far fewer solvent particles are needed. In contrast, both iFLD and nFLD become slower at higher volume fractions, since the computational effort in these methods is dominated by the calculation of pairwise colloid-colloid interactions; at higher volume fractions, particles are in closer proximity, leading to more frequent near-field pairwise lubrication interactions. In both cases, the nFLD method is significantly slower, due to the matrix problem that must be solved at every time step (see Eq. (18)). As already mentioned, we have used a similar integration time step size for both methods, as the integration time step in this case is limited by the inter-colloid potential. This leads to unphysical diffusion results at short times for the nFLD method, which is predicated on the use of a much larger time step. For the situation considered here, the iFLD method is clearly the better choice; however, for cases where a much softer colloid interaction potential is used, the nFLD method can potentially allow for a time step that is orders of magnitude larger than that of iFLD; as a result, nFLD can be a much more expedient method in terms of total simulation time achieved, even if each time step requires more wall time to compute.
As previously discussed, a plethora of solvent parameter choices is possible for both the MPCD and DPD methods, which can have significant impacts on computational performance. In this work, several parameters were selected for optimal computational speed; as shown in the previous three subsections, the relatively large size (and consequently lower number density) of DPD particles leads to some undesirable effects, particularly at high colloid volume fractions. For more realistic simulations, smaller DPD particles at higher number densities would likely be required, which could lead to a large increase in computational expense. In contrast, the MPCD results do not suggest any need for larger number densities of MPCD particles, except perhaps to achieve higher shear rates without shear thinning of the solvent, and higher Schmidt numbers. Although we have not attempted to do so, it is clear that in order to achieve the accuracy of MPCD using DPD, a much larger number of DPD particles would be required, leading to a drastically higher computational effort. We therefore consider MPCD to be the more computationally expedient method of the two, despite the comparable performance data in Table 4.

Even for a well-characterized system like the suspension of monodisperse hard spheres treated here, the quantitative prediction of seemingly simple properties such as equilibrium diffusion and shear viscosity is challenging. Based on the results presented, we find the FLD method to offer the best balance between accuracy and computational expedience. Presumably, more rigorous variants of the SD method would provide even more accurate results, albeit with some additional computational expense. However, the flexibility of the different simulation techniques beyond the simple system discussed above must also be considered when evaluating their overall use and future potential. The possibility of introducing more advanced capabilities in these and other similar simulation methods is therefore addressed in detail in the next section.
Advanced capabilities
Newtonian dynamics particle solvers coupled with any approach to hydrodynamic interactions all suffer from at least one fundamental limiting assumption that prevents general application to practical processing routes for integrating colloidal particles into useful materials. In this section we present the current state of, and outstanding challenges in, what we believe are the key barriers to such general application: (1) non-Newtonian, complex solvent rheology; (2) solvent drying/curing; (3) non-spherical particles; and (4) complex flow geometries.
Despite these barriers, we want to stress that even in their current state, modeling and simulation tools can be quite useful in addressing practical problems. Processing colloidal dispersions into either highly-ordered films/structures or disordered particle compacts or composites involves complex flows (casting, coating, extrusion, mixing). Regarding the underpinning process flow, much can be learned from the work undertaken and reviewed in Sects. 3 and 4, as nearly all practical processes include colloidal diffusion and shear-induced (or viscometric) flow. Moreover, many systems of interest consist of Newtonian or nearly Newtonian solvent rheology, and many colloidal particles are spherical or can be treated as such for practical purposes. Hence, computational studies such as the ones reviewed and taken up in Sect. 4 are useful for determining the effects of inter-particle potentials on flow rheology, volume fraction effects, and microstructural evolution tendencies. Moreover, these approaches can also be used to contrive and/or fit coarse-grained constitutive models for use in larger scale simulations.
Unfortunately, solvents in real-world applications of nanocomposite fabrication are often non-Newtonian, and many particles are not spherical. New materials development is most often focused on the solid state (except for lubricants or liquid crystals), and so drying and solidification are key to most processes. Finally, many processing flows (coating, extrusion, etc.) are not simply bulk shear, but a combination of shear and extensional in nature, together with wall-confinement effects. In this section we address these complications and provide some guidance, based on existing literature and on our own experience, on how best to proceed with existing technology, as well as the research challenges that must be addressed.
Non-Newtonian solvents
The non-Newtonian response of a colloidal suspension to impressed shear, even when the solvent is Newtonian, is well known (see Sect. 4). Classic shear thinning is always observed in such systems. Even more extraordinary non-Newtonian responses are observed with large particle loadings and strong colloidal interactions (attractive and/or repulsive), including time-dependent relaxation, viscoelasticity, and even yield-stress behaviors. It is clear that the modeling and simulation community has made great strides in predicting such behaviors [75,91,197]. For processing flows associated with directed colloidal assembly (e.g. the work of Snyder et al. [208]) solvents are usually of low molecular weight (high vapor pressure) and demonstrate a Newtonian (constant viscosity) response to deformation. However, processing flows of this sort remain largely bench-top research, and are in rare instances practiced in industry for large-scale manufacturing. More often, dense suspensions are processed into particle compacts and composites for a number of applications, including the production of energy materials (e.g. batteries, fuel cells), catalytic materials (porous with high surface area), and other structural polymer, metal and ceramic materials. These suspensions are replete with binders and surfactants added for the express purpose of particle interaction control and binding/adhesion (through curing) upon solvent removal. Due to their higher molecular weight, these constituents result in non-Newtonian solvent response, especially during the curing process. Unfortunately, the modeling and simulation methods discussed above are not easily extended to account for such effects.
Extension of the hydrodynamics to accommodate time-dependent relaxation and/or viscous response (the classic G′ and G″ quantities measured in oscillatory shear experiments [19]) has received the most attention. Clearly, any implicit solvent model based on the Stokes equation for a Newtonian fluid (see Eq. 3) is fundamentally problematic in this respect. This includes all pairwise and isotropic terms for methods such as SD, FLD and BD. However, simple approximations can be used to account for certain types of non-Newtonian rheological behaviors in these models. Perhaps the simplest approach to this problem is to augment expressions for the hydrodynamic resistance tensor (see Eq. 10) with a frequency-dependent solvent viscosity. To this end, the equation of motion for a single colloid can be written as follows:

m_i (dv_i/dt) = F_i^H(t) + F_i^B(t) + F_i^C(t)

The hydrodynamic term is written as a time convolution integral:

F_i^H(t) = −∫_0^t κ(t − t′) [R_0]_ii v_i(t′) dt′

The kernel κ(t) contains the non-Newtonian behavior of the solvent (note that in Eq. 1 the kernel is assumed to be κ(t) = δ_D(t); hence, after evaluating the integral one is left with a time-independent or quasi-steady drag). Finally, the Brownian term must reflect the correlations due to the time dependence of the viscosity:

⟨F_i^B(t) F_i^B(t′)⟩ = k_B T [R_0]_ii κ(|t − t′|)

In each of these expressions we have also assumed an FLD-type mean-field-like approximation for the volume fraction dependence of the viscosity via the diagonal terms of the R_0 tensor (i.e., [R_0]_ii), while ignoring the lubrication terms R_δ (see Eq. 9) for simplicity, although the lubrication terms can be included as well. This means that any non-Newtonian effects would not be accounted for in long-range hydrodynamic interactions. This approach of course assumes that the colloid particle size is significantly larger than the long-chain polymers which lead to the time-dependent solvent viscosity, or that, in accordance with the assumptions of microrheology, the viscosity that the colloid feels is equivalent to the shear viscosity of the solvent [146]. One example of an algorithm to numerically simulate the resulting so-called generalized Langevin equation, implemented in LAMMPS, has been presented by Baczewski and Bond [6], who deployed a frequency- (time-) dependent viscosity with the proper Brownian fluctuation terms (see also references therein).
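When κ(t) is a sum of exponentials, the convolution can be carried by auxiliary variables, in the spirit of the extended-variable scheme of Baczewski and Bond [6]. The sketch below is our own simplified single-particle, single-exponential illustration of that idea (κ(t) = e^{−t/τ_m}/τ_m, with a scalar γ_0 standing in for [R_0]_ii), not the LAMMPS implementation:

```python
import numpy as np

rng = np.random.default_rng(9)
kT, m, gamma0 = 1.0, 1.0, 5.0        # illustrative; gamma0 plays [R_0]_ii
tau_m, dt, n_steps = 2.0, 0.01, 200_000

c = np.exp(-dt / tau_m)              # exact per-step decay factor
r_std = np.sqrt(kT * gamma0 / tau_m) # stationary amplitude of the OU force

v = S = R = 0.0
v2_acc = 0.0
for _ in range(n_steps):
    # Memory drag S(t) = -(gamma0/tau_m) * int_0^t exp(-(t-t')/tau_m) v dt',
    # carried by an auxiliary variable instead of an explicit convolution.
    S = c * S - (1.0 - c) * gamma0 * v
    # Colored random force: Ornstein-Uhlenbeck process with
    # <F_B(t) F_B(t')> = kT * gamma0 * kappa(|t - t'|), per the
    # fluctuation-dissipation relation above.
    R = c * R + np.sqrt(1.0 - c * c) * r_std * rng.normal()
    v += dt * (S + R) / m            # simple explicit Euler step
    v2_acc += v * v

print("<v^2> (should approach kT/m = 1):", v2_acc / n_steps)
```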
Extending explicit solvent methods such as MPCD and DPD coupled to DEM colloidal solvers towards non-Newtonian solvents has also been the subject of research. Pryamitsyn and Ganesan [182] extended DPD to account for colloidal particle systems in viscoelastic solvents. Other groups have paid considerable attention to extending DPD to viscoelastic fluid mechanics, largely by connecting a portion of the particles with springs. This can either be interpreted as a direct representation of polymer additives [119,194,212] or as an abstract augmentation of the DPD algorithm that yields viscoelastic solvent behavior [210]. In a similar vein, Tao et al. [214] advanced the MPCD technique presented in Sect. 3.2 to viscoelastic fluids by connecting pairs of MPCD particles with harmonic spring-like potentials. The method is based on alternating streaming and collision steps, just like the Newtonian solvent implementation. They applied this approach to oscillatory shear flow and found the elastic-viscous frequency response to be consistent with that of a Maxwell fluid. This work did not address interactions with other colloidal particles. Despite the ability of these types of models to qualitatively capture several features of viscoelastic behavior, no approach that we are aware of has been able to map the non-Newtonian rheology of a realistic solvent and yield quantitative predictions for real-life applications. The scaling and mapping of realistic solvent properties that were of significant concern even for the simple system in Sect. 4 are greatly compounded for these more complex solvent models.
Several groups have extended Stokesian dynamics (SD) as well as the more general boundary element method (BEM, see Sect. 2.2.1) to account for non-Newtonian and viscoelastic solvent behavior. The approach just described for DPD and MPCD, i.e. including an additional discrete element representation of particles with elastic character, was adopted by Binous and Phillips for SD [16,17]. By adding spherical dumbbells connected by finite-extension nonlinear-elastic (FENE) springs to the colloid suspension, they were able to recover the behavior of a Boger viscoelastic fluid, i.e. one exhibiting elastic effects but having a constant shear viscosity. In a different approach, an approximate analytical reformulation of SD to simulate a Maxwell fluid has been presented by Schaink et al. [192]. The more general boundary element method has also been extended to viscoelastic solvents by Phan-Thien and Fan using an analytical treatment for an Oldroyd-B fluid [173], which in principle can be extended to other rheological models, but is limited to unbounded domains (or possibly domains with simple boundary conditions). A similar approach has been advanced in the continuum flow arena, viz. to use a microstate equation in the continuum which carries the response of elastic molecules to solvent deformation [98]. The continuum approach typically must be mesh-based (e.g. finite element or finite volume methods), or posed in a Lagrangian-particle framework, as is discussed below.
Immersed boundary methods (IBM), as recently reviewed by Lechman et al. [134], offer the most direct route to non-Newtonian solvent rheology, as the added (but expensive) benefit of a background or body-fitted mesh provides a route for several decades of mesh-based algorithm development to address non-Newtonian effects. Halin et al. [88] advanced the so-called Lagrangian particle method for computing time-dependent viscoelastic flows. Their method deploys either a differential constitutive equation (macroscopic approach) or a kinetic theory model (micro-macro approach) on a background mesh for the viscoelastic stress. The discrete Lagrangian particles carry the stress state and its history through the interpolation, and they can serve in a dual role as suspension particles. Hwang et al. [99] pioneered an approach based on earlier work of Baaijens [5] to couple discrete particle motion with a uniform background FEM model of incompressible viscoelastic flow. They deployed Lagrange multiplier constraints to match the particle-fluid stress and solvent mass-displacement boundary conditions on the particles. However, their application was restricted to two dimensions, with no apparent effort to extend it to three. A recent method developed by Noble et al. [163], known as the conformal-decomposition finite element method (CDFEM), may offer the most general framework to address non-Newtonian solvent effects. They have made considerable progress towards scalability on parallel platforms and hence may be well positioned to incorporate such effects in flows of colloidal suspensions. A more recent IBM-like approach was advanced by Chrispell and Fauci [40] and applied to a complex flow geometry (refer to Sect. 5.4). Using a finite-difference, marker-in-cell Navier-Stokes solver, they developed a method to integrate an Oldroyd-B constitutive equation. They successfully applied their approach to peristaltic pumping of colloidal suspensions at reasonably high Weissenberg numbers. In summary, IBM methods coupled with colloidal DEM offer a straightforward route to solve this problem, but with the requirement of at least 100-1,000 particles in a three-dimensional meso-scale simulation, computational expediency remains an outstanding issue with this approach.
Accounting for non-Newtonian solvents in colloidal dynamics solvers is clearly still an outstanding multiscale challenge. From the most general, straightforward approaches based on coupled FEM/FDM and DEM to the more esoteric extensions of FLD, SRD, and DPD, the research challenges lead, not surprisingly, to the need for computational efficiency and rheological accuracy. FLD, with all of its merits vis-à-vis DPD and SRD, will face the same potentially unsolvable challenges, mainly due to its quasi-analytical origins, a potential pitfall that extends equally to SD and BEM.
Non-spherical particles
Colloidal particles are often spherical due to the manner in which they are synthesized. In both high-temperature gas-phase reactions and solution-based nucleation and growth, spherical shapes are thermodynamically favorable. As already discussed, the spherical particle assumption mathematically enables, and is in fact the foundation of, most DEM approaches that include contact and colloidal forces. On the other hand, particles larger than colloids are commonly aspherical, as fabrication routes in this regime are often based on pulverization and milling. Larger particles made by spraying/atomization and drying also often end up ellipsoidal due to drying stresses. In the colloidal size regime, however, recent topical interest in so-called nano-materials has drawn attention to processing particles like nanotubes (e.g. carbon nanotubes), nanowires (e.g. silicon) and graphene flakes [136,198], and the extension of the associated hydrodynamics to accommodate such shapes has come to the forefront. In addition to nano-materials, some creative solution-based approaches to producing mildly aspherical shapes like di-spherical colloids, or di-colloids, have emerged [111]. However, colloidal particles of this sort are rarely encountered in large-scale manufacturing, perhaps because the process of making them has not been scaled up. Highly aspherical colloidal particles can also be fabricated with nano-imprinting processes. Caldorera-Moore et al.
[32] used imprint lithography to make a variety of shapes of hydrogel particles. Again, such approaches hold promise for small batches of particles used for applications like drug delivery, but they represent a specialized case vis-à-vis modeling requirements for concentrated dispersions used in nanocomposites. While the development of modeling approaches that address generalized particle shapes is an important challenge to be met in the broader DEM space, most colloidal applications today deal with easily parameterized shapes, such as spheres, cylinders, and plates.
There are a number of ways to construct and parameterize models of non-spherical particles, as illustrated in Fig. 8. To first order, arbitrary shapes such as a di-colloid can be approximated with dumbbell assemblies or, if a smooth single particle is required, with a spheroid, as shown in Fig. 8a, d. Clearly, if the goal is to represent a hexahedral body, these simple shapes do not capture important features, such as sharp corners and flat surfaces, which can have a significant effect on microstructure and dynamics. A more generic and accurate representation can be accomplished with collections of spheres (overlapping or non-overlapping) [81,178], which are then constrained to move as rigid bodies, as in Fig. 8b, c. This generalization is also applicable to cylinders, rods and sheet-like objects. The corrugation that results from this approach is sometimes undesirable, as it can lead to artificial inter-particle stacking at close range. The most generic approach involves representing a shape using a surface triangular mesh, with the possibility of an interior tetrahedral volume mesh, as shown in Fig. 8e, f. The LAMMPS software package [176] provides the basic computational infrastructure to accommodate all of these representations. What remains a challenge, however, is a workflow which allows a collection of such non-spherical particles to be built for a mesoscale simulation. Several helpful algorithms for generating and using libraries of arbitrary shapes have been presented in the literature [81,133,178], but the process is highly dependent on the problem at hand and typically requires some effort on the part of the analyst.
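As a concrete illustration of the composite-sphere representation of Fig. 8b, c, the minimal Python sketch below places overlapping spheres along an axis to approximate a rigid rod. The function name and parameter values are our own; in practice the resulting centers would be handed to the rigid-body machinery of a DEM code (e.g. a rigid-body fix in LAMMPS).

```python
import numpy as np

def composite_rod(n_spheres=8, sphere_radius=0.5, overlap=0.5):
    """Place sphere centers along the z-axis to approximate a rigid rod.

    overlap: fraction of the diameter by which neighboring spheres overlap
    (0 = touching, ->1 = fully merged). The returned centers would then be
    constrained to move as a single rigid body in the DEM integrator.
    """
    spacing = 2.0 * sphere_radius * (1.0 - overlap)
    z = np.arange(n_spheres) * spacing
    z -= z.mean()  # center the rod at the origin
    centers = np.column_stack([np.zeros(n_spheres), np.zeros(n_spheres), z])
    return centers, sphere_radius

centers, radius = composite_rod()
rod_length = centers[-1, 2] - centers[0, 2] + 2 * radius
print(f"{len(centers)} spheres, aspect ratio ~ {rod_length / (2 * radius):.1f}")
```

The overlap parameter directly controls the corrugation mentioned above: larger overlaps smooth the surface at the cost of more constituent spheres per unit length.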
Regardless of the representation, extensions for accommodating aspherical particles are difficult to implement in most simulation methods for two reasons. First, the method itself may be predicated on the spherical particle assumption, as is the case in the quasi-analytical hydrodynamic treatments underpinning Brownian dynamics, Stokesian dynamics and FLD. Second, soft/long-range colloidal forces are difficult to reconcile without the generalized and expensive surface-to-surface distance calculations required to determine the pairwise force. Significant work has been pursued to address both of these areas, as we discuss below.
Hydrodynamic interactions in implicit solvents pose a significant challenge. The Stokesian dynamics (SD) method and its expedients, such as FLD, are limited to spheres due to their analytical underpinnings. However, dicolloid shapes in suspensions have been modeled with SD-related extensions [125,127]. Meng and Higdon [153,154] have generalized to plate-like particles using an extension to SD that approximates the hydrodynamics based on a planar assemblage of hard spheres (viz. no colloidal interaction). An extension of their work added Brownian motion to the model [154]. They reported the effects of Peclet number (shear rate) and volume fraction on the effective suspension viscosity. Additionally, SD has been extended to prolate spheroids [41,42] and fibres [30,190]. Nevertheless, the SD approach and all related methods are fundamentally limited to relatively simple particle shapes that are amenable to analytical treatment. Unfortunately, generalizations of these methods, like the boundary element method (BEM), which easily accommodate non-spherical shapes, are often too expensive to be practical for large systems. On the other hand, explicit solvents such as MPCD and DPD can in principle be used in conjunction with arbitrary particle shapes. The challenge there lies in developing efficient solvent-colloid collision detection and surface interaction algorithms that enforce no-slip boundary conditions at the particle surface. This can largely be resolved for simple flow geometries with MPCD [24], and significant effort has been expended for DPD [58,174,185]. Further investigations for more complex flow situations involving the various particle representations discussed above are warranted for these methods, but the difficulties appear to be manageable.
Two alternative modeling routes exist that are feasible for general application to aspherical particles. First, atomistic models have no such limitations regarding particle shape, as both solvent and colloid can be represented at the atomistic scale. Presuming accurate atomistic potentials are available for all atom-atom (solvent and particle) interactions, addressing non-spherical colloidal systems is feasible [115] but limited to just a handful of colloidal particles. In short, an atomistic approach is currently too computationally intensive to address mesoscale systems of particles in a suspension at large enough length scales to predict bulk properties. At the other extreme, a second approach with no such particle shape limitations would be a direct continuum formulation for particle-solvent interactions coupled with grid/element-based immersed boundary methods (cf. Sect. 5.1). In this case the mechanics of each particle can be handled in the continuum, i.e. particle motion and deformation [162], or with a discrete element approach [134]. Particle-solvent interactions can be accommodated with a variety of approaches, as recently reviewed by Noble et al. [163]. Beyond atomistic approaches, no work was found on coarse-grained mesoscale modeling of graphene flake suspensions or carbon nanotubes. The major challenge here would be the approximate long-range inter-particle forces (which may be modulated by the presence of polymers/additives, in addition to their hydrodynamic interactions), as well as obtaining accurate representations of the shapes (triangular meshes or composite spheres). Because of the simpler shapes, solving a microstate orientation equation in the continuum together with the appropriate constitutive relations in a FEM/FDM framework is another option [141,142], but detailed mechanisms of aggregation/agglomeration cannot be obtained in this approach.
Perhaps the most significant challenge pertains to models for the proper colloidal interaction potentials for non-spherical particles. More specifically, the challenge is to develop integrated colloidal potentials that can be efficiently and accurately applied to coarse-grained colloidal model systems. For mildly aspherical particles like spheroids, several colloidal potentials are available [68]. For composite spherical representations (see Fig. 8c), long-range inter-particle interactions can in principle be treated as a sum of pairwise interactions among constituent spheres, but the physical relevance of such an approach is questionable, especially for cases of highly overlapping spheres. On the other hand, short-range (granular) contact potentials can be readily implemented in this context [81,178].
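The naive pairwise summation just described can be sketched in a few lines; the snippet below uses a truncated Lennard-Jones pair law in arbitrary units purely for illustration, and, as noted above, its physical relevance degrades for strongly overlapping constituent spheres.

```python
import numpy as np
from itertools import product

def composite_pair_energy(centers_a, centers_b, eps=1.0, sigma=1.0, rcut=2.5):
    """Sum a truncated Lennard-Jones interaction over all constituent-sphere
    pairs of two composite particles (arbitrary units)."""
    e = 0.0
    for ra, rb in product(centers_a, centers_b):
        r = np.linalg.norm(np.asarray(ra) - np.asarray(rb))
        if r < rcut:
            sr6 = (sigma / r) ** 6
            e += 4.0 * eps * (sr6 * sr6 - sr6)
    return e

# two single-sphere "composites" at separation 1.2*sigma
print(composite_pair_energy([[0.0, 0.0, 0.0]], [[0.0, 0.0, 1.2]]))
```

A properly integrated potential of the Everaers type [68] would replace the double loop with a closed-form expression over the particle volumes, which is the direction the text argues for.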
While several atomistic studies addressing this challenge have been undertaken [93], the best and most accurate approach remains to be determined. in't Veld et al. [221] recently compared pair-wise interactions in atomistic simulations of composite nanoparticles with integrated spherical forms. They compared pairwise forces between colloidal particles composed of smaller aggregated LJ atoms to the integrated case due to Everaers [68]. They determined that simply computing the interaction via the Lennard-Jones atoms on the surface of the composite colloidal particle led to incorrect temperature/vapor-pressure behavior, unlike the properly integrated case. Similar results were observed for composite nanoparticles with the atom-atom interactions truncated at a finite distance. In summary, while DLVO and related potentials developed for spherical systems can be extended with minor modifications to mildly aspherical systems (spheroids, dicolloids, etc.), much work remains for generic shapes. A promising alternative yet to be explored in great detail may be the surface element integration method [15], which addresses in detail the colloidal interactions between spheres and plate-like particles.

Fig. 9 Phases of coating and drying of colloidal suspensions: a convective assembly process for highly ordered films, b fast drying and fully densified films lead to c capillary stresses
Drying and solidification
Addressing the underlying mechanics of processing colloidal suspensions into functional materials (films, composites, fibers) requires not just a firm understanding of the rheological behavior of the suspension, as discussed in Sect. 5.1, but also of the stability of the suspension under processing and during solidification. Solidification typically starts with volume reduction through drying. Two distinct processing routes are of technological relevance here: the first is aimed at the production of highly-ordered films (monolayers or multiple layers) through what is known as colloidal directed assembly, viz. self-assembly influenced by external forces. Although a number of approaches have been explored that deploy electromagnetic forces, the most noteworthy and scalable route is so-called convective assembly/drying [208,229], which deploys a metering blade to apply the film and subsequent drying to assemble the ordered layer(s) of particles (see Fig. 9). In this case, the initial suspension is highly dilute in order to achieve sufficient particle mobility. Unfortunately, long-range ordering is prone to thermally-induced defects, and so this process is typically run slowly (of order 1 cm/min in typical coating processes). The second distinct drying/processing route is often used in the production of disordered nanocomposites. Dispersions cast into such materials are typically highly loaded (of order 20-40% or higher by volume) in order to reduce energy requirements and to force the microstructure to be amorphous. At high particle concentrations, long-range ordering is not the goal, and a plethora of high-speed coating and drying processes can be brought to bear. Managing suspension rheology is paramount to successful processing, and subsequent drying and curing is often fraught with defects (residual stress and cracking).
Modeling and simulation tools at the continuum scale have been central to process design of coating and drying for decades [179,196]. Colloidal particle concentration tracking during drying of drops and films has been treated with suspension balance and population balance approaches at the continuum scale [29,156]. Our interest here, however, is in understanding the connection of processing to microstructure, which is best achieved with meso-scale models of the sort we have discussed above. At this scale, modeling the effects of a true drying process is still in need of attention.
Foremost are the challenges associated with model formulations that best represent an actual drying process. If the concern is basically the physics underpinning uniform, bulk volume reduction, then this can be achieved with a uniform compression of the simulation box in a meso-scale model, using the usual periodic boundary conditions in all directions. For implicit solvent methods like FLD, these simulations are straightforward without any significant modeling advances, and essentially mimic the case in which solvent is removed from the volume in a uniform way. If particle diffusion is much faster than the volume reduction rate, this simple simulation does represent a slow drying regime, viz. one in which the diffusion time scale τ_D = a²/D₀ is much less than a/v_int, where v_int represents the wall speed and a the particle radius. With explicit solvent methods, the solvent particles at the boundaries must be deleted from the system in a way that maintains thermodynamic consistency. These considerations can affect the temperature of the system. Some consideration should also be made of latent heat consumption (cooling) during drying, as some workers have addressed at the atomistic [38] and meso-scales [184].
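The slow/fast drying criterion quoted above can be evaluated directly. The sketch below compares τ_D = a²/D₀ against a/v_int, using a Stokes-Einstein estimate for D₀; the threshold and all numerical values are illustrative only, since the crossover is gradual in practice.

```python
def drying_regime(a, D0, v_int):
    """Compare the particle diffusion time tau_D = a**2 / D0 with the
    interface (wall) time a / v_int; the ratio is a Peclet-like number."""
    ratio = (a**2 / D0) / (a / v_int)  # = a * v_int / D0
    return ("slow drying" if ratio < 1.0 else "fast drying", ratio)

# Example: 100 nm particle in a water-like solvent
kT, mu, a = 4.1e-21, 1e-3, 1e-7          # J, Pa*s, m (room temperature)
D0 = kT / (6 * 3.14159 * mu * a)         # Stokes-Einstein estimate
print(drying_regime(a=a, D0=D0, v_int=1e-6))  # 1 um/s drying front
```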
Actual drying or solvent removal drives the predominant solvent and particle transport in one direction relative to the drying interface. Whether the medium is a drop, fiber or film, drying-induced microstructural formation is still a three-dimensional problem and ripe for exploration with mesoscale models (see Fig. 9). Special boundary conditions on the drying interface must be contrived for such simulations. Boundaries normal to the interface can be treated as periodic, but at the interface solvent must be removed in a thermodynamically consistent way, depending on the drying regime. We identify two distinct drying regimes:

- "Slow drying": particle diffusion rate much faster than the drying rate
- "Fast drying": drying rate much faster than the particle diffusion rate

Within the "fast drying" regime of colloidal systems, as pictured in Fig. 9, there are several phases of the process with distinct physical descriptions:

- Constant rate regime: low volume fraction of colloids, which do not significantly impede the diffusion path for solvent
- Falling rate regime: particle concentration increases at the drying front (free surface), impeding solvent mobility and slowing evaporation
- Percolated network: solvent recedes into the particle space and drying is controlled by pore-diffusion/flow of solvent

In a slow-drying regime the rate of drying continually falls, but less precipitously, as no surface layer ("skin") forms. When drying is slow, simply moving a single boundary along its normal direction at the desired drying speed for implicit solvent methods, and/or deleting solvent particles in explicit methods commensurately with the moving boundary, is useful for exploring the microstructural implications. In the case of atomistic explicit solvents such as LJ or atomistically represented water, particles naturally form a liquid-vapor interface and move from the liquid to the vapor phase; the evaporation rate can be controlled by removing particles from the vapor phase at various rates [38]. If drying is "fast", then the effect of surface-microstructure (skin) formation on the drying rate must be accounted for. Clearly, explicit solvents should capture this effect: evaporation of particles from the surface, or boundary, will induce diffusion of solvent particles to feed the evaporation. That diffusion, if fast enough, leads to heightened convection of colloids towards the surface, induces tension in the liquid, and ultimately leads to a compression of particles near the surface. No known mesoscale approach deploying an implicit solvent (e.g. FLD) can capture this effect, due to its quasi-analytical nature. Challenges also remain for coarse-grained explicit solvent approaches related to local solvent depletion in regions where colloids are close-packed. Simulations of drying processes clearly require additional effort in algorithmic development.
Some relevant work in this area is noteworthy. Interfaces between dense (liquid) and gaseous regions can be modeled with coarse-grained explicit solvents, provided that some energetic interaction among solvent particles exists. If additional energetic interactions between solvent molecules and colloids are present, these approaches can also account for capillary interactions among colloids. However, colloid-solvent interactions must be adjusted appropriately in order to reproduce realistic capillary effects, and the choice of parameters is not straightforward. Most of this work deploys lattice-Boltzmann solvents [112,113,199], but is typically limited to small systems. For models where colloid-solvent energetic interactions are not present (such as MPCD or FLD, as well as some variants of DPD), capillary effects will not arise spontaneously. Instead, additional capillary forces must be applied to particles at the interface (in the case of an implicit solvent, no true interface exists, but it can be simulated for the slow drying regime with a moving flat wall, as described above). These capillary forces are based on approximate analytical solutions to the Young-Laplace equation, and have been studied extensively by Kralchevsky and coworkers [47,120-122] and more recently by Vassileva et al. [220]. The resulting expressions have been used in analytical equilibrium models [135] as well as dynamic simulation methods [79]. In the latter case, Fujita and Yamaguchi used an immersed boundary method to couple the resulting DEM to a Navier-Stokes solver, a rigorous but computationally expensive approach. They do, however, capture the drying-induced convective effects critical in fast-drying regimes.
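For orientation, the lateral immersion capillary attraction studied by Kralchevsky and coworkers admits a simple asymptotic estimate, F ~ 2πγQ₁Q₂/L with "capillary charges" Q_i = r_i sin ψ_i, valid when the contact-line radii are much smaller than the separation L, which in turn is much smaller than the capillary length. The prefactor and regime of validity below follow that asymptotic result as we understand it, so treat the numbers as order-of-magnitude estimates only.

```python
import numpy as np

def immersion_capillary_force(gamma, r1, r2, psi1, psi2, L):
    """Approximate lateral immersion capillary force between two particles
    in a thin liquid film: F ~ 2*pi*gamma*Q1*Q2/L, with capillary charges
    Q_i = r_i*sin(psi_i) (r_i: contact-line radius, psi_i: meniscus slope
    angle). Asymptotic estimate only; see Kralchevsky et al. for details."""
    Q1, Q2 = r1 * np.sin(psi1), r2 * np.sin(psi2)
    return 2.0 * np.pi * gamma * Q1 * Q2 / L

# two 1-um particles at a water surface (gamma ~ 0.072 N/m), 5 um apart
print(f"F ~ {immersion_capillary_force(0.072, 1e-6, 1e-6, 0.2, 0.2, 5e-6):.2e} N")
```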
As drying proceeds to a complete percolated, stress-supporting network, evaporation continues as the solvent recedes into the film (Fig. 9b). Capillary stresses can further induce consolidation at this point, and in some cases particles can deform and even crack. At a larger scale the film can form defects (mud-cracking). Simulating this problem at the mesoscale requires granular contact potentials (e.g. Hookean/Hertzian with friction) and possibly some accommodation for particle deformation and even grain-boundary diffusion (at high temperature). Theoretical treatments in the continuum have been reported, such as the quasi-analytical approach of Singh and Tirumkudulu [204]. Additionally, network approaches with statistically-based bond-breaking probabilities [149,206] are limited to simplified phenomenological models.
To make matters more complicated, in most practical applications, drying for composite material production is accompanied by the curing of a binder. We found no relevant literature in which this effect was accommodated in mesoscale simulations. With coarse-grained explicit solvent approaches, the chemistry underpinning curing would be difficult to implement, as it is often based on free-radical or condensation reactions, which would require the tracking of species concentration, etc. One could implement some sort of viscosity increase through a thermal effect, which would account for some of the dynamics of curing during the consolidation of particles. This is readily implemented in implicit solvent models (including SD, BD, FLD), so long as a suitable rate model for the viscosity is available. Accounting for curing effects, including their contribution to the final residual stress of a film, remains an outstanding research challenge for DEM mesoscale modeling.
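A rate model for cure-dependent viscosity of the sort alluded to above might look like the following sketch, which couples first-order cure kinetics to a Castro-Macosko-type viscosity divergence near the gel point. All parameter values are placeholders of our choosing; this is not drawn from the cited literature, merely one plausible form that an implicit solvent method could consume.

```python
import numpy as np

def cure_viscosity(t, eta0=1.0, k=0.05, alpha_gel=0.8, a=1.5, b=2.0):
    """Toy chemorheological model: first-order cure kinetics
    d(alpha)/dt = k*(1 - alpha), with a Castro-Macosko-type divergence
    eta = eta0*(alpha_gel/(alpha_gel - alpha))**(a + b*alpha)
    as the degree of cure alpha approaches the gel point."""
    alpha = 1.0 - np.exp(-k * t)                   # degree of cure
    alpha = np.minimum(alpha, 0.999 * alpha_gel)   # clamp below gelation
    return eta0 * (alpha_gel / (alpha_gel - alpha)) ** (a + b * alpha)

for t in (0.0, 10.0, 30.0):
    print(f"t = {t:5.1f}  eta = {cure_viscosity(t):.3g}")
```

In an implicit solvent simulation, the solvent viscosity would simply be updated from such a rate model at each time step, which is why the text notes that curing effects are readily accommodated in SD, BD and FLD.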
Variable complex flow geometry
Up to this point we have only addressed the applicability and extensibility of mesoscale DEM modeling in simple geometries, with very little discussion of their application to flows not simply driven by Brownian motion or impressed simple shear in simulation boxes with obvious periodicity. The drying problems discussed in Sect. 5.3 present the first need for non-standard boundary conditions (non-periodic), and it is clear that much work is needed, especially with respect to the quasi-analytical, implicit solvent formulations like FLD. Extending the methods highlighted in this paper to accommodate no-slip walls, non-periodic boundaries that account for drying fronts, or flow regimes that are not simple bulk shearing is of great interest for other applications as well.
As with the other advanced capabilities addressed herein, a general approach with no restrictions and complete generality in complex geometries is to solve the Navier-Stokes equations in the actual application of interest. With mesh-based methods like finite elements, or lattice-particle methods (LB), and even with the explicit solvent methods discussed earlier, such simulations can in principle be carried out, so long as wall interactions with the solvent particles can be designed to produce the desired boundary behavior (e.g. no slip). That is, there are no inherent limitations to imposing no-slip boundaries or other sources of impressed flow (pressure-driven flow, stretching flow, etc.). Recent work by Zhao et al. [237] used SRD and DPD successfully in modeling electro-osmotic flow in a microfluidic cell. The same methods have also been recently extended to complex geometries and free surfaces between two phases (like capillary free surfaces), as motivated by related work using lattice Boltzmann techniques [1,112,113]. Although implicit solvent models such as SD and its expedients can be extended to include the effects of planar walls [203], more general boundaries cannot easily be treated and require significant theoretical and algorithmic modifications.
Regarding generalized Navier-Stokes solvers coupled with DEM particle solvers, the work of Sasic et al. [191] is noteworthy. Building on a finite volume solver and the volume-of-fluid multiphase flow model, similar to the finite element based approach of Baaijens [5], Sasic et al. advance a method to couple a particle solver using the volume-of-fluid marker function to imprint the volume displacement on the solvent mesh. The method is similar to a multiphase flow approach, but seems to be limited to a small number of particles. In this same generalized framework, Chrispell and Fauci [40] used finite difference methods and marker-and-cell free boundary tracking. In general, these grid-based approaches are computationally intensive when coupled with DEM, and thus limited to small systems until work on scalability to massively parallel platforms is undertaken. However, as with all such FEM, FDM, or FVM solvers, these approaches can accommodate all boundary conditions.
What remains an outstanding challenge in the rheological arena is the simulation of mixed shear and extensional deformations, or even pure extensional deformation. No obvious periodic simulation domains exist to impress a purely extensional flow. However, the work of Kraynik and Reinelt [123] advanced a clever method to do just this for the flow of foams, even though it seems only recently to have been discovered for use in atomistic simulations. Their work solves the lattice-compatibility condition for planar extension. While straightforward for simple shearing flow, this condition is subtle for extensional flows. At the time of this writing, no work in the meso-scale particle flow arena has taken advantage of this result.
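The lattice-compatibility idea can be sketched as follows for planar extension: the box is stretched and compressed until the accumulated Hencky strain reaches the Kraynik-Reinelt period ε* = ln((3 + √5)/2) ≈ 0.9624, at which point the deformed lattice coincides with an unstrained replica and the box can be reset. The orientation bookkeeping of the full scheme is omitted, so the snippet below is schematic only, not production code.

```python
import numpy as np

# Strain period at which the deformed lattice maps onto an unstrained one
EPS_STAR = np.log((3.0 + np.sqrt(5.0)) / 2.0)  # ~0.9624

def box_lengths(L0, eps_dot, t):
    """Box lengths under planar extension with periodic KR remapping:
    stretch along x, compress along y, reset every EPS_STAR of strain."""
    eps = (eps_dot * t) % EPS_STAR   # strain accumulated since last reset
    return L0 * np.exp(eps), L0 * np.exp(-eps)

print(f"strain period = {EPS_STAR:.4f}")
print(box_lengths(L0=10.0, eps_dot=0.1, t=25.0))
```

The practical payoff is that box dimensions remain bounded for arbitrarily long elongational runs, which is exactly what periodic shear (Lees-Edwards-type) boundaries provide for simple shearing flow.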
In conclusion on this topic, the most pressing challenge, which may in fact not have an elegant solution, is the extension of implicit solvent methods such as SD and FLD to boundary and physical conditions often encountered in real-life applications. Specifically, because these methods are quasi-analytical, extension of the underpinning theory to accommodate long-range hydrodynamic interactions between particles and walls, and particles and free surfaces is difficult and may only be accessible with mesh-based or particle-based solvents. With regard to explicit solvent methods such as MPCD and DPD, some challenges remain in selecting surface-solvent interactions to enforce the desired boundary conditions, but in principle arbitrary flow geometries can be simulated.
Conclusions
We have presented an overview of particle-based mesoscale simulation techniques for colloidal suspensions. In Sect. 4, we focused on a quantitative comparison of three commonly used techniques: FLD, an implicit solvent method that represents a significant simplification of Stokesian dynamics, as well as DPD and MPCD, two distinct explicit solvent approaches to simulating coarse-grained fluids. We deployed each method to model thermal motion (diffusion) and viscometric flow (shear) of suspensions with a prescribed solvent viscosity. Our approach stands in contrast to some previous works, in which only key non-dimensional parameters were reproduced, but various specific properties and parameters (e.g. viscosity and particle diameter) were not explicitly controlled or mapped to a particular solvent [11,22,36,169,181,217]. While this latter approach would likely have been adequate for the simple diffusion and bulk shearing simulations presented in Sect. 4, the exact mapping of solvent properties is an essential step in simulations of more realistic, complex systems.
With FLD, both the solvent viscosity and particle size are direct inputs to the simulation, thus making for a trivial mapping procedure. With MPCD, the availability of analytical expressions greatly simplifies the selection of MPCD fluid parameters, and the particle size is in principle a direct simulation input for the colloid-solvent coupling schemes considered here; however, in practice, it turns out that both colloid diffusivity and viscosity are sensitive to many algorithmic details of the MPCD method, and no clear choice can be identified to produce the desired results. With DPD, solvent properties cannot be accurately predicted a priori for a given set of parameters, and must be measured from simulations of the pure DPD fluid. Furthermore, no clear choice or physical basis for colloid-solvent coupling exists, and ad hoc potentials must be introduced, with parameters that can only be selected based on a trial and error approach. While this is feasible for the systems considered in this work, it becomes unwieldy for more complex cases (e.g. polydispersity, nonspherical particles, etc.), and any parameters thus selected do not transfer readily to different situations. As such, we conclude that FLD and similar Stokesian dynamics-based methods are in general superior to explicit solvent methods with regard to mapping of key physical parameters. We identify this as a notable shortcoming of both MPCD and DPD in the simulation of colloid suspensions, and an area that requires further method development.
With regard to accuracy, the FLD method is once again superior as measured by the accuracy of the early- and late-time diffusivities at various volume fractions. FLD results are in excellent agreement with theoretical and experimental predictions, whereas notable deviations are apparent in both MPCD and DPD data. With MPCD, most variants and parameters tested yield values that are in reasonable agreement with the expected diffusivities, but no clear picture emerges as to systematic effects of different parameters. With DPD, parameters can in principle be selected to match the target diffusivity data at low volume fractions, but the method fails at higher volume fractions. MPCD and FLD viscosity data are in excellent agreement, but MPCD is limited to moderate shear rates; while DPD performs well with regard to viscosity at low volume fractions, it once again fails at high volume fractions due to the finite size of DPD particles.
Given the superior accuracy and performance of SD-based methods such as FLD over explicit solvent methods, it would seem that the former is a clear choice for colloidal suspension simulations. However, the merits of explicit solvent methods become more apparent when moving away from the canonical system of monodisperse spheres in an infinite domain to more complex, realistic systems, such as those discussed in Sect. 5. Although the difficulties noted for the simple systems persist in these cases, there is significant flexibility to be gained through the use of explicit solvents. In particular, modeling hydrodynamics for non-spherical particles and complex flow geometries is relatively straightforward with explicit solvent methods, whereas only a limited number of simple shapes and geometries can be treated using SD-based implicit solvent methods. Explicit solvents extend more readily to models of solvent evaporation, although simple models of curing can be more readily implemented in the context of FLD. Finally, modeling solvents with non-Newtonian viscosity appears to be more mature for MPCD and DPD, although much work is needed for all methods to achieve realistic complex solvent rheologies.
Overall this work provides a direct comparison of several particle-based simulation methods for colloid suspensions. For simple systems, Stokesian dynamics-like methods, such as FLD, are the clear method of choice; presumably, more rigorous variants of the SD method yield even better results. We have quantified a number of shortcomings for MPCD and DPD explicit solvent methods even when applied to a suspension of monodisperse spherical particles. We feel that these deficiencies can be resolved, and that the potential for applications to realistic systems is promising. While greater flexibility can be attained using direct numerical simulations of solvent hydrodynamics (e.g. finite element solutions of relevant continuum equations), the computational simplicity and ease of parallelization for particle methods cannot be matched. In general, we believe particle-based methods such as those discussed in this work will remain an important tool for the simulation of colloidal suspensions.
"Materials Science",
"Physics",
"Chemistry"
] |
An algebraic characterization of self-generating chemical reaction networks using semigroup models
The ability of a chemical reaction network to generate itself by catalyzed reactions from constantly present environmental food sources is considered a fundamental property in origin-of-life research. Based on Kauffman's autocatalytic sets, Hordijk and Steel have constructed the versatile formalism of catalytic reaction systems (CRS) to model and to analyze such self-generating networks, which they named reflexively autocatalytic and food-generated. Recently, it was established that the subsequent and simultaneous catalytic functions of the chemicals of a CRS give rise to an algebraic structure, termed a semigroup model. The semigroup model allows one to naturally consider the function of any subset of chemicals on the whole CRS. This gives rise to a generative dynamics by iteratively applying the function of a subset to the externally supplied food set. The fixed point of this dynamics yields the maximal self-generating set of chemicals. Moreover, the set of all functionally closed self-generating sets of chemicals is discussed and a structure theorem for this set is proven. It is also shown that a CRS which contains self-generating sets of chemicals cannot have a nilpotent semigroup model, and thus a useful link to the combinatorial theory of finite semigroups is established. The main technical tool introduced and utilized in this work is the representation of the semigroup elements as decorated rooted trees, allowing one to translate the generation of chemicals from a given set of resources into the semigroup language.
1 Introduction

Questions about the origin of life are as fascinating as they are difficult to even attempt to answer. There are at least two schools of thought on how to approach such questions. The first one is to construct minimal models involving concrete chemicals, best exemplified by the RNA world hypothesis formulated by Gilbert (1986), Joyce (1989) and many others. The great advantage of such concrete models is that they can be tested experimentally, going all the way back to the classical experiments by Miller (1953) and Oró (1961). However, there can never be certainty about any hypothesized model, and even the most convincing ones such as the RNA world hypothesis lack reliable data with regard to their first appearance, cf. Joyce (2002); Penny (2005). An alternative school of thought is focused on working out the minimal requirements which any sensible theory of the origin of life should satisfy. Prominent proponents of this approach are Oparin (1957), Dyson (1999), Kauffman (1986), and many others. However, already the formulation of a meaningful theoretical framework is challenging and there have been various attempts, including (M, R)-systems by Rosen (1958), hypercycles by Eigen (1971), autopoietic systems by Varela et al (1974), chemotons by Gánti (1975) and autocatalytic sets by Kauffman (1986). A feature that all these frameworks have in common is the importance of autocatalysis and the occurrence of autocatalytic cycles, as discussed in the review by Hordijk and Steel (2018). The catalytic reaction system (CRS) formalism by Steel (2000); Hordijk and Steel (2004) is a versatile framework that, motivated by Kauffman's autocatalytic sets, captures the essence of several of the aforementioned approaches. It has been used to compute thresholds for the occurrence of self-generating and self-sustaining motifs in CRS based on the level of catalysis by Hordijk et al (2010, 2011, 2012, 2015); Hordijk and Steel (2017, 2018) and even for the analysis of the metabolic network of E. coli by Sousa et al (2015).
In the companion article by Loutchko (2022), it has been shown that CRS have an algebraic structure that is generated by the simultaneous and subsequent function of chemicals acting as catalysts on the CRS. It was then shown how a naturally defined discrete dynamics yields the maximal self-sustaining set of chemicals for any given CRS, and a characterization of the lattice of functionally closed self-sustaining sets of chemicals was derived. This article aims to achieve the same for self-generating sets of chemicals, which is a stricter notion than that of self-sustainment and requires more mathematical care. In this regard, the main technical contribution of this article is to construct a representation of the semigroup elements as decorated rooted trees, as they are naturally suited to deal with the generation of chemicals from a set of externally supplied chemicals.
Mathematical outline
The construction of the semigroup models is based on the CRS formalism introduced by Hordijk and Steel (2004); Hordijk et al (2011). A CRS is given by the datum of a chemical reaction network, i.e. a finite set of chemicals X together with a finite set of reactions R, where each reaction r ∈ R is determined by the set of its reactants dom(r) ⊂ X and products ran(r) ⊂ X. Additionally, catalysis data is specified by a set C ⊂ X × R, meaning that for each (x, r) ∈ C the reaction r is catalyzed by the chemical x, and a food set F ⊂ X of constantly supplied chemicals is given. A CRS is said to be RAF (reflexively autocatalytic and food-generated) if each chemical in the CRS can be generated from the food set F by a series of catalyzed reactions. A set of chemicals is said to be RAF if the CRS supported on it is RAF. The notion of RAF formalizes self-generating reaction networks in the framework of CRS. Details on CRS are given in Section 2.1.
In Section 2.2, it is shown that the reactions and the catalytic functions of chemicals have the structure of a semigroup, which is additionally equipped with a partial order and an idempotent addition. The semigroup operation corresponds to subsequent functionality, whereas the addition corresponds to the simultaneous application of functions. More precisely, to each reaction r ∈ R a function φ_r is assigned as the set-map φ_r : 𝒳 → 𝒳 on the power set 𝒳 := P(X_F) of the non-food chemicals X_F := X \ F. The function φ_r gives the set of non-food products of r if and only if the set of non-food reactants of r is contained in its argument. Such functions have the usual composition, given by (φ_r • φ_r′)(Y) = φ_r(φ_r′(Y)), and an idempotent addition, given by (φ_r + φ_r′)(Y) = φ_r(Y) ∪ φ_r′(Y) for all Y ⊂ X_F and r, r′ ∈ R. They generate the extended semigroup model S_R = ⟨φ_r⟩_{r∈R}.
To each of the chemicals x ∈ X, a function φ_x : 𝒳 → 𝒳 is assigned by using the catalysis data: φ_x := Σ_{(x,r)∈C} φ_r. The functions of the chemicals generate the semigroup model S = ⟨φ_x⟩_{x∈X}, which is a subsemigroup of S_R. The objects S_R and S are semigroups with respect to both + and •, hence they are called semigroup models.
The elements of the semigroup models are partially ordered via φ ≤ ψ iff φ(Y) ⊂ ψ(Y) for all Y ⊂ X_F. Lemma 2.13 states that the partial order on the semigroup models, the partial order on 𝒳, and the two operations • and + are all compatible. A central notion is the function Φ_Y ∈ S of a set of non-food chemicals Y ⊂ X_F, which is defined as the unique maximal element of the subsemigroup S(Y) = ⟨φ_x⟩_{x∈Y∪F} of S. The function Φ_Y captures all catalytic functionality that can be exerted by Y and the food set on all other chemicals of the CRS.
Section 3 provides more insight into the structure of the semigroup models. The basis is the definition of a tree algebra T(A) with a decorating algebra (A, •, +) as follows: The objects in T(A) are rooted trees whose edge labels are arbitrary elements of A. The vertex labels are determined by these edge labels: all leaves are labelled by the multiplicatively neutral element id. At each non-leaf vertex, the labels of the outgoing edges are multiplied with the labels of the respective child vertices and the sum is taken over all outgoing edges. This is illustrated in Fig. 1A. The addition of trees is performed by identifying their roots, with unchanged labels on the edges, as illustrated in Fig. 1B. The multiplication of trees T_1 • T_2 is carried out by replacing all leaves of T_1 with copies of T_2. Again, all edge labels are unchanged, as illustrated in Fig. 1C.
The tree algebras relevant for semigroup models have their edges labelled by the generating sets of the respective models, i.e. they are T := T({φ_x}_{x∈X} ∪ {0}) and T_R := T({φ_r}_{r∈R} ∪ {0}). The main result of the section is Theorem 3.7, which states that there is a commutative diagram of homomorphisms relating T, T_R, S and S_R, whereby the surjective evaluation map ev sends a tree to its root label, i.e. to the corresponding semigroup element, and the map τ : T → T_R is defined based on the formula φ_x = Σ_{(x,r)∈C} φ_r. More precisely, τ replaces an edge with the label φ_x by edges labelled by φ_r for each (x, r) ∈ C, and a copy of the child tree of the original edge is attached to each of the new edges, as illustrated in Fig. 1D. A tree representing a semigroup element is a lift of the element via the evaluation homomorphism ev. The algebraic reason for the existence of such representations is the interplay of the two operations • and + via the right distributivity (φ + ψ) • χ = φ • χ + ψ • χ. Loosely speaking, the trees in T_R correspond to "reaction mechanisms", which proceed recursively from the leaves to the root such that a reaction labeling an edge occurs subsequently to the "mechanism" of its head vertex and such that all reactions labeling edges with the same tail are carried out simultaneously. Thus, it is natural to say that a chemical x ∈ X_F can be generated from the food set if there is a reaction mechanism for its generation, given by a tree T ∈ T_R. This translates to x ∈ ev(T)(∅) in this setup. And indeed, it is proven in Lemma 3.12 that this property is equivalent to the standard definition of generation from the food set.
In Section 4, it is shown how the representation of semigroup elements by decorated rooted trees can be used to characterize CRS with the RAF property by the simple condition Φ_{X_F}(∅) = X_F (Theorem 4.1). This implies that for a RAF set of chemicals X′_F ⊂ X_F, the property X′_F ⊂ Φ_{X′_F}(∅) holds (Corollary 4.2) and, moreover, that the equality X′_F = Φ_{X′_F}(∅) is a sufficient condition for X′_F to be a RAF set of chemicals (Proposition 4.3). Then, a generative dynamics on 𝒳 is defined by Y ↦ Φ_Y(∅) and, as one of the main results, it is proven that the dynamics with initial condition given by X_F leads to the maximal RAF set of chemicals. Finally, new insights and conjectures gained from the semigroup approach to CRS with the RAF property are discussed. It is shown that the generative dynamics with the initial condition given by a RAF set of chemicals X′_F leads to a fixed point X′_F^{*g}, which contains X′_F. If X′_F ⊊ X′_F^{*g} holds, then X′_F is not stable, because its own catalytic function will produce all chemicals in X′_F^{*g}. Theorem 4.14 provides a structure theorem for the lattice of functionally closed RAF sets of chemicals. In the concluding Section 5, the importance of the representations of semigroup elements by decorated rooted trees is discussed, and the biochemical significance of functionally closed RAF sets of chemicals is illustrated. For example, one would expect chemicals which are uniquely contained in a minimal functionally closed RAF set of chemicals to be involved solely in the functionality of the respective RAF set, whereas chemicals that are contained in multiple minimal functionally closed RAF sets of chemicals are more likely to be involved in communication and interaction between the respective RAF sets. This can potentially carry information on the evolutionary role of the respective chemicals. This is an illustration of how the semigroup models can be used to discover new concepts in CRS theory. In future work, such concepts will be applied to CRS corresponding to real biological systems.
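The generative dynamics can be prototyped directly. The sketch below encodes our reading of Φ_Y(∅) as the food-generated closure under reactions whose catalyst lies in Y ∪ F, applied to a two-chemical toy CRS with made-up names; it is an interpretation of the construction outlined above, not code from the paper.

```python
# Toy CRS: food {f1, f2}; r0: f1 + f2 -> c (catalyzed by d);
#          r1: c + f1 -> d (catalyzed by f2).
F = frozenset({"f1", "f2"})
reactions = {"r0": (frozenset({"f1", "f2"}), frozenset({"c"})),   # (dom, ran)
             "r1": (frozenset({"c", "f1"}), frozenset({"d"}))}
catalysis = {("d", "r0"), ("f2", "r1")}

def phi_on_food(Y):
    """Chemicals generable from F using only catalysts in Y union F,
    iterated to a fixed point (our reading of Phi_Y applied to the
    empty state)."""
    Z = set()
    while True:
        grown = set(Z)
        for r, (dom, ran) in reactions.items():
            if any(c in Y | F for (c, rr) in catalysis if rr == r) \
                    and dom - F <= Z:
                grown |= ran - F
        if grown == Z:
            return frozenset(Z)
        Z = grown

def maximal_raf(X_F):
    """Iterate Y -> Phi_Y(empty set); the fixed point is the maximal
    self-generating (RAF) set of chemicals."""
    Y = frozenset(X_F)
    while (Y_next := phi_on_food(Y)) != Y:
        Y = Y_next
    return Y

print(maximal_raf({"c", "d"}))   # -> frozenset({'c', 'd'})
```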
Semigroup models
The construction of semigroup models and their elementary properties are provided in Section 2.2. They are based on the catalytic reaction system (CRS) formalism, which is introduced in Section 2.1. This is a condensed version of Sections 2 and 3 of the introductory companion article by Loutchko (2022). Only the RAF property (Definition 2.5) and the extended semigroup model S_R (Definition 2.10) are newly introduced here.
The CRS formalism
The introduction of the catalytic reaction system (CRS) formalism and of the reflexively autocatalytic and food-generated (RAF) property are based on the work of Hordijk and Steel (2004). The notion of CRS is designed to capture the catalytic functionality within a given chemical reaction network. It does not take into account detailed kinetic or thermodynamic information.
Definition 2.1. A catalytic reaction system (CRS) is a tuple (X, R, C, F) where X is a finite discrete set of chemicals, R is a finite set of reactions, C ⊂ X × R is the catalysis data for the reactions R and F ⊂ X is the constantly present food set. Each reaction r ∈ R is given by a pair (dom(r), ran(r)) of mutually disjoint subsets of X, called the domain and the range of r. The elements of dom(r) are called the reactants and the elements of ran(r) are the products of r. For a pair (x, r) ∈ C, the reaction r is said to be catalyzed by x and x is said to be a catalyst of r. The food set F is required to satisfy the following closure property: (C) All reactions r ∈ R with a catalyst in F must involve chemicals outside of F as reactants, i.e. they must satisfy dom(r) ∩ (X \ F) ≠ ∅. If X = F, the CRS is said to be trivial.
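Definition 2.1 translates naturally into a small data structure. The following Python sketch encodes a CRS; the class and field names are our own, and the concrete reactions are illustrative placeholders, loosely modeled on the RAF example discussed below.

```python
from dataclasses import dataclass
from typing import FrozenSet, Tuple

@dataclass(frozen=True)
class Reaction:
    dom: FrozenSet[str]  # reactants
    ran: FrozenSet[str]  # products (disjoint from dom)

@dataclass
class CRS:
    X: FrozenSet[str]              # chemicals
    R: Tuple[Reaction, ...]        # reactions
    C: FrozenSet[Tuple[str, int]]  # catalysis pairs (chemical, reaction index)
    F: FrozenSet[str]              # food set

# r0: f1 + f2 -> c (catalyzed by d); r1: c + f1 -> d (catalyzed by f2).
# r1 has a food catalyst and a non-food reactant, so property (C) holds.
r0 = Reaction(frozenset({"f1", "f2"}), frozenset({"c"}))
r1 = Reaction(frozenset({"c", "f1"}), frozenset({"d"}))
crs = CRS(frozenset({"f1", "f2", "c", "d"}), (r0, r1),
          frozenset({("d", 0), ("f2", 1)}), frozenset({"f1", "f2"}))
```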
Example 2.2. Fig. 2 shows a representation of a CRS as a directed bipartite graph. This representation is used throughout this article. The chemicals are represented by solid vertices and the reactions r = (dom(r), ran(r)) are represented by circles. For each reaction, there are directed edges from each chemical in dom(r) to the reaction vertex and from the reaction vertex to each chemical in ran(r). The catalysis data (x, r) ∈ C is indicated by a dashed directed edge from the chemical x to the reaction r. The food set is indicated by a circle around the food chemicals. For a set of reactions R′ ⊂ R, the domain is defined as dom(R′) := ∪_{r∈R′} dom(r), with the analogous definitions for ran(R′) and supp(R′).
From now on, a CRS (X, R, C, F) will be fixed. When referring to any of the four sets X, R, C or F, it is implicitly assumed that they are part of the full data of the CRS. It will be convenient to abbreviate the non-food chemicals as X_F := X \ F and to make the same definition for any subset X′ of X containing F, i.e. X′_F := X′ \ F. Moreover, given a set X′_F ⊂ X_F, the symbol X′ will denote the set X′_F ∪ F ⊂ X.
Definition 2.3. For a set X′_F ⊂ X_F of non-food chemicals, define the restrictions of R and C as R|_{X′} := {r ∈ R such that supp(r) ⊂ X′} and C|_{X′} := C ∩ (X′ × R|_{X′}), yielding the subCRS (X′, R|_{X′}, C|_{X′}, F) generated by X′_F. In the article by Loutchko (2022), a broader notion of subCRS is introduced. This notion is, however, not needed in this work, as the focus will be exclusively on subCRS generated by sets of non-food chemicals. Note that the subCRS according to Definition 2.3 is always closed in the terminology used by Hordijk and Steel (2017), i.e. all reactions of the full CRS with support on X′ are actually contained in the respective subCRS. Now the central notions of food generated CRS and reflexively autocatalytic and food generated (RAF) CRS are introduced, following Hordijk and Steel (2004, 2017). However, the definitions given by Hordijk and Steel (2004, 2017) are centered around the set of reactions R, whereas the definitions given here involve the whole CRS. In Remark 2.9, the relation to the definitions used in this work is discussed. The F property formalizes the idea that all chemicals of the CRS can be generated from the food set. The RAF property means that the generation from the food set can be achieved with catalyzed reactions only.
Definition 2.4. A CRS (X, R, C, F) has the food generation property (F property) if each x ∈ X_F is generated by some sequence of reactions from F, i.e. if the following condition is satisfied for each x ∈ X_F:

(F) There exist sets of reactions R_1, ..., R_n ⊂ R such that dom(R_1) ⊂ F, dom(R_{i+1}) ⊂ F ∪ ran(R_1) ∪ ... ∪ ran(R_i) for all 1 ≤ i ≤ n − 1, and x ∈ ran(R_n).

Definition 2.5. A CRS (X, R, C, F) is reflexively autocatalytic and food generated (RAF) if it has the F property and if, for each chemical x ∈ X_F, the sets of reactions R_1, ..., R_n ⊂ R featured in the condition (F) can be chosen to be subsets of π_R(C). In other words, the reactions in R_1, ..., R_n are all required to be catalyzed.
Remark 2.6. The notion of self-generation is stronger than that of self-sustainment. Self-sustaining CRS are treated within the semigroup formalism by Loutchko (2022). Self-sustainment requires the CRS to have a catalyzed set of reactions whose products contain all non-food chemicals. The RAF condition is stronger than this, because one can set R_x := ∪_{i=1}^n R_i for the reactions featured in condition (F), and R′ := ∪_{x∈X_F} R_x will satisfy the requirement for self-sustainment. On the contrary, there are self-sustaining CRS which are not self-generating.
The definition of the RAF property descends to sets of non-food chemicals: a set X′_F ⊂ X_F is called a RAF set of chemicals if the subCRS generated by it is RAF.

Example 2.8. The CRS in Fig. 2 is RAF and thus X_F = {c, d, e} is a RAF set of chemicals. Moreover, there is a RAF subset of chemicals consisting of X′_F = {c, d}, because d catalyzes the formation of c from the food set and c reacts with the food set to form d, which is catalyzed by the food set.
Remark 2.9 (Relation to the notion of RAF commonly used in the literature). The Definition 2.4 of the F property given here coincides verbatim with the one commonly used in the CRS literature. The Definition 2.5 of the RAF property is equivalent to the definition of a closed RAF set of reactions given by Hordijk and Steel (2004, 2017), modulo the inclusion of uncatalyzed reactions in the set of reactions R in the definition given here. Hordijk and Steel (2004, 2017) define the RAF property for subsets of R rather than for whole CRS. One can easily lift the restriction of the RAF sets of reactions being closed by defining subCRS with sets of chemicals X′ that allow for arbitrary sets of reactions R′ ⊂ R|_{X′}. This construction is given by Loutchko (2022).
The semigroup model of a CRS
The chemical reactions of a CRS have a natural algebraic structure given by the simultaneous and subsequent occurrence of reactions, as well as combinations thereof. Making this mathematically precise leads to the notion of an extended semigroup model S_R of a CRS. The function of a chemical is defined by the simultaneous occurrence of all the reactions it catalyzes. All combinations of subsequent and simultaneous functions of chemicals give rise to the semigroup model S of a CRS. The construction of the semigroup models is motivated by the work of Rhodes and Nehaniv (2010) in spirit, but technically the objects constructed here differ significantly, cf. Loutchko (2022), Remark 3.4.
Throughout this section, let (X, R, C, F) be a CRS. The state of the CRS is defined by the presence or absence of each of the non-food chemicals, i.e. by a subset Y ⊂ X_F. Therefore, the state space 𝒳 of the CRS is the power set 𝒳 := P(X_F). A reaction r ∈ R acts on the state space via its function

φ_r(Y) := ran(r) \ F if dom(r) \ F ⊂ Y, and φ_r(Y) := ∅ otherwise,

for all Y ⊂ X_F. Two maps φ, ψ : 𝒳 → 𝒳 can be composed via the addition +, which is defined as

(φ + ψ)(Y) := φ(Y) ∪ ψ(Y) (2.2)

for all Y ⊂ X_F. This operation is associative, commutative and idempotent. Moreover, the multiplication • is given by the usual composition of maps

(φ • ψ)(Y) := φ(ψ(Y)). (2.3)

Finally, the function φ_x : 𝒳 → 𝒳 of a chemical x ∈ X is defined as the sum over all reactions catalyzed by it via

φ_x := Σ_{(x,r)∈C} φ_r. (2.4)

Recall that the full transformation semigroup T(A) of a finite discrete set A is the set of all maps {f : A → A}, where the semigroup operation • is the composition of maps. The semigroup model of a CRS is a subsemigroup of T(𝒳) and is defined as follows.
Definition 2.10. The semigroup model S of a CRS is the subsemigroup of T(𝒳) generated by the functions {φ_x}_{x∈X} through the operations of addition and composition, i.e. S is the smallest subsemigroup of the full transformation semigroup T(𝒳) closed under • and + that contains {φ_x}_{x∈X} and the zero function, given by 0(Y) = ∅ for all Y ⊂ X_F. It is denoted by S = ⟨φ_x⟩_{x∈X}. Analogously, the extended semigroup model of the CRS is generated by the functions φ_r of all reactions r ∈ R. This model is denoted as S_R = ⟨φ_r⟩_{r∈R}. As subsemigroups of T(𝒳), the semigroups S and S_R are finite. The objects S and S_R are called semigroup models because they are semigroups with respect to both operations • and +. The correct description in terms of universal algebra is, however, an algebra of type (2, 2, 0), cf. Almeida (1995). The semigroup model S_R contains S as a subalgebra of type (2, 2, 0) and this will be expressed by saying that S is a subsemigroup model of S_R.
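For concreteness, the reaction functions φ_r and the operations of Equations (2.2)-(2.4) admit a direct set-based implementation. The toy CRS below echoes the example sketched in Section 1; the encoding choices (frozensets as states, Python closures as semigroup elements) are ours.

```python
F = frozenset({"f1", "f2"})                          # food set
reactions = {"r0": (frozenset({"f1", "f2"}), frozenset({"c"})),
             "r1": (frozenset({"c", "f1"}), frozenset({"d"}))}  # r: (dom, ran)
catalysis = {("d", "r0"), ("f2", "r1")}              # C subset of X x R

def phi_r(r):
    """phi_r(Y): non-food products of r iff its non-food reactants lie in Y."""
    dom, ran = reactions[r]
    return lambda Y: ran - F if dom - F <= Y else frozenset()

def add(f, g):
    """Idempotent addition, Eq. (2.2): (f + g)(Y) = f(Y) | g(Y)."""
    return lambda Y: f(Y) | g(Y)

def mul(f, g):
    """Composition, Eq. (2.3): (f . g)(Y) = f(g(Y))."""
    return lambda Y: f(g(Y))

def phi_x(x):
    """Function of a chemical, Eq. (2.4): sum over the reactions x catalyzes."""
    out = lambda Y: frozenset()                      # zero function
    for (c, r) in catalysis:
        if c == x:
            out = add(out, phi_r(r))
    return out

print(phi_x("d")(frozenset()))                       # {'c'}: d catalyzes r0
print(mul(phi_x("f2"), phi_x("d"))(frozenset()))     # {'d'}: then f2 catalyzes r1
```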
Remark 2.11. In addition to the two algebraic operations, there is a natural partial order on S_R and S, given by φ ≤ ψ iff φ(Y) ⊂ ψ(Y) for all Y ⊂ X_F. There is an important subsemigroup of S generated by the functions of chemicals in a given set X′_F ⊂ X_F together with the food set:

Definition 2.12. For a subset X′_F of X_F, the semigroup model S(X′_F) < S generated by the functions of X′_F and the food set is defined as S(X′_F) := ⟨φ_x⟩_{x∈X′_F∪F}.

The semigroup models satisfy the following elementary properties. These properties follow directly from the definitions. However, if necessary, the proofs for the respective statements on S can be found in Loutchko (2022), Section 3.2, and the proofs for S_R are analogous.
(S1) All elements φ ∈ S_R respect the partial order on 𝒳 given by inclusion of sets, i.e. Y ⊂ Y′ implies φ(Y) ⊂ φ(Y′).
(S5) The operations • and + on S_R have the following distributivity property: (φ + ψ) • χ = φ • χ + ψ • χ (2.7) holds for any φ, ψ, χ ∈ S_R. (S6) The right distributivity in Equation (2.7) holds more generally for arbitrary elements φ, ψ, χ ∈ T(𝒳). (S7) Φ_{X′_F} is the unique maximal element of S(X′_F). In particular, S has a unique maximal element, given by Φ_{X_F}. (S8) The functions of sets of chemicals are compatible with the partial order, i.e. Y ⊂ Y′ implies Φ_Y ≤ Φ_{Y′}.

Remark 2.14. Any subCRS (X′, R|_{X′}, C|_{X′}, F) generated by the set of chemicals X′_F has a semigroup model given by Definition 2.10, which will be denoted by S(X′, R|_{X′}). It is a subsemigroup of the full transformation semigroup T(P(X′_F)) on the power set of X′_F. Any element φ′ ∈ S(X′, R|_{X′}) can be extended to a function ext(φ′) ∈ T(𝒳) via ext(φ′)(Y) := φ′(Y ∩ X′_F) for all Y ⊂ X_F. Together with the property (S2), this yields the inequality ext(φ′) ≤ φ for the corresponding element φ ∈ S. This finishes the summary of the elementary properties of the semigroup models. In the next section, a representation of the semigroup elements, which is well suited to deal with the condition (F) in food generated CRS, is constructed.
Semigroup models as decorated rooted trees
This section is dedicated to the construction of a representation of elements of S as decorated rooted trees. It forms the technical basis for the proofs in the next section. Albeit the main idea of this section is rather straightforward, the verification of all the claimed properties requires some care. Therefore, the reader may prefer to skip this section up until Theorem 3.7 during a first reading.
The general idea developed in this section is as follows: The edges of the rooted trees are labeled by functions in a subset of the full transformation semigroup T(𝒳). Each vertex is labelled by the sum of the functions on the outgoing edges, multiplied with the functions of the respective head vertices (Definition 3.1, see Fig. 3 for an illustration). Moreover, there are operations of addition and multiplication (Definition 3.3 and Fig. 4) on the set of decorated rooted trees that are compatible with the addition and multiplication of the semigroup elements on the root (Lemma 3.5). The addition of two trees is performed by identifying their roots, and the multiplication is given by replacing the leaves of the first tree with copies of the second tree. Finally, to establish a relation to the semigroup models S_R and S, the edge labels are chosen from the generating sets {φ_r}_{r∈R} ∪ {0} and {φ_x}_{x∈X} ∪ {0}, respectively. This idea is also sketched in the mathematical outline in the introductory Section 1. The main Theorem 3.7 of this section establishes that both classes of decorated rooted trees are compatible with the algebraic structure of the semigroup models. The merit of this construction is that the F and RAF properties of a CRS can be reformulated in terms of decorated rooted trees and then directly cast into the language of semigroup models (Lemma 3.12).
The following notations and conventions with regard to rooted trees will be used. Let T = (V, E, t) be a rooted tree with vertex set V, edge set E ⊂ V × V and root t ∈ V. Edges (v, w) ∈ E are directed from v to w. Here, v is called the tail of the edge and w is its head. For each vertex v ∈ V, let ch(v) ⊂ V denote the set of children of v, which is defined as ch(v) := {w ∈ V such that (v, w) ∈ E}. Also, denote by T_v the subtree of T rooted at the vertex v. The level lv(v) of a vertex v is the length of the path from the root to v, and lv_n(T) ⊂ V denotes the set of all vertices of a given level n. Moreover, the non-standard notation elv_n(T) ⊂ E denotes the set of edges of level n, which are all the edges whose head vertex has level n. The notation ht(T) denotes the height of the tree, i.e. the length of the longest path from the root to a leaf. Finally, lf(T) is the set of all leaves of T, which is given by lf(T) := {v ∈ V such that ch(v) = ∅}. An edge (v, w) ∈ E is said to be terminal if the vertex w is a leaf.

Definition 3.1. For any subset A ⊂ T(𝒳) of the full transformation semigroup T(𝒳), an A-decorated rooted tree T = (A, V, E, t, ω_V, ω_E) is a finite rooted tree with vertex set V, edge set E, a root t ∈ V and two maps ω_E : E → A and ω_V : V → T(𝒳), where ω_V is recursively given by

ω_V(v) := id if v ∈ lf(T), and ω_V(v) := Σ_{w∈ch(v)} ω_E((v, w)) • ω_V(w) otherwise. (3.1)

The addition and multiplication in the definition of ω_V take place inside T(𝒳) as previously defined (cf. Equations (2.2) and (2.3)). Fig. 3 illustrates this construction.
Decorated rooted trees will be referred to as trees. For the set of edge labels A ⊂ T(𝒳), denote the set of all A-decorated trees by T(A). Also denote the set of all A-decorated trees of height n by T(A)_n and of height at most n by T(A)_{≤n}. A subtree is defined as follows.
Fig. 3 Example of a decorated rooted tree with decorations from the generating set. The labels of the edges determine the labels on the vertices recursively: at each vertex, a sum over the labels of its children, multiplied by the labels on the respective connecting edges, is taken. The edges are labeled to the left of the respective edge and the resulting labels of the vertices are on the right of the respective vertex. The root is labelled by the function φ_a, which respects the labels on the edges, i.e. the defining relation (3.1) holds for all e ∈ E.
The set T(A) is equipped with two operations. Loosely speaking, given two trees T_1, T_2 ∈ T(A), their sum is defined by identifying the roots of T_1 and T_2 and their product by replacing each leaf of T_1 with a copy of T_2. Definition 3.3. Let T_1, T_2 ∈ T(A) be two A-decorated rooted trees given by the data T_i = (A, V_i, E_i, t_i, ω^i_V, ω^i_E). Define the tree T_+ := T_1 + T_2 with data T_+ = (A, V_+, E_+, t_+, ω^+_V, ω^+_E) by identifying the roots of the two trees. There is a canonical map π : V_1 ⊔ V_2 → V_+. The edge set E_+ is defined as the image of E_1 ⊔ E_2 under π, with the decoration map ω^+_E induced by ω^1_E and ω^2_E. Because the restriction of π to E_1 ⊔ E_2 is one-to-one, this map is well-defined. The map ω^+_V is given by the relation (3.1) with E_+ and ω^+_E instead of E and ω_E. The construction is illustrated in Fig. 4A.
Moreover, define the tree T_• := T_1 • T_2 by replacing each leaf of T_1 with a copy of T_2. The data on T_• is given as follows.
The vertex set is V_• := (V_1 ⊔ (lf(T_1) × V_2))/∼, where the equivalence relation ∼ relates each leaf l ∈ lf(T_1) ⊂ V_1 with the root t_2 ∈ V_2 of the respective copy of V_2 indexed by l. Again, there is a canonical map onto V_•, and the edge set E_• is defined as the image of the edge sets under this map; the map ω^•_V is given by the relation (3.1) with E_• and ω^•_E instead of E and ω_E. This construction is illustrated in Fig. 4B. The set T(A), together with the two operations • and +, is referred to as the tree algebra T(A). Remark 3.4. It follows directly from the definition of the addition and multiplication of trees that the operations are associative. Moreover, the addition is commutative and the right distributivity holds by construction.
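To make Definitions 3.1 and 3.3 concrete, the following minimal sketch implements decorated trees and the two operations in Python. The operations `add`, `mul` and the neutral leaf label `one`, standing in for Equations (2.2) and (2.3), are caller-supplied; the example bindings at the end (pointwise union and composition of functions on sets) are our own illustrative assumption, not notation from the text.

```python
from dataclasses import dataclass, field
from typing import Any, Callable, List

@dataclass
class Child:
    edge_label: Any            # an element of A, a subset of T(X)
    subtree: "Tree"

@dataclass
class Tree:
    children: List[Child] = field(default_factory=list)

def ev(t: Tree, add: Callable, mul: Callable, one: Any):
    """Root label via the recursion (3.1): leaves carry the multiplicatively
    neutral element, inner vertices the sum over children of
    edge_label * ev(child)."""
    if not t.children:
        return one
    terms = [mul(c.edge_label, ev(c.subtree, add, mul, one)) for c in t.children]
    out = terms[0]
    for term in terms[1:]:
        out = add(out, term)
    return out

def tree_add(t1: Tree, t2: Tree) -> Tree:
    """Sum of trees: identify the two roots (Fig. 4A)."""
    return Tree(t1.children + t2.children)

def tree_mul(t1: Tree, t2: Tree) -> Tree:
    """Product of trees: replace every leaf of t1 by a copy of t2 (Fig. 4B)."""
    if not t1.children:
        return Tree(list(t2.children))
    return Tree([Child(c.edge_label, tree_mul(c.subtree, t2)) for c in t1.children])

# Illustrative bindings, taking (2.2)/(2.3) to be pointwise union and
# composition of functions on sets (an assumption made for this sketch only):
add = lambda f, g: (lambda Y: f(Y) | g(Y))
mul = lambda f, g: (lambda Y: f(g(Y)))
one = lambda Y: Y
atomic = lambda f: Tree([Child(f, Tree())])   # the atomic tree T_f
```

With these bindings, ev(tree_add(T1, T2)) agrees with add(ev(T1), ev(T2)) for nontrivial trees, and ev(tree_mul(T1, T2)) agrees with mul(ev(T1), ev(T2)) whenever right distributivity holds, mirroring Lemma 3.5.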
The algebraic structure on T(A) thus defined is compatible with the algebraic structure on T(X) by mapping a tree T ∈ T(A) to the label on its root, ev(T) := ω_V(t) (3.3). Lemma 3.5. The map ev : T(A) → T(X) is a homomorphism with respect to addition + and multiplication •.
Proof The notation from Definition 3.3 is used. Let T_1, T_2 ∈ T(A) be two A-decorated rooted trees. Let T_+ = T_1 + T_2. By construction of T_+, the projection π : V_1 ⊔ V_2 → V_+ is injective on all vertices except on the root. Moreover, π respects the level of a vertex, i.e. lv(v) = lv(π(v)), and the decoration function is preserved for vertices v of level 1. This yields the homomorphism property for addition, ev(T_1 + T_2) = ev(T_1) + ev(T_2). By construction, T_1 is a subtree of T_• and thus the respective vertices and edges of T_• and T_1 can be identified. It is now shown inductively that for all v ∈ T_1, considered as a subtree of T_•, the relation ω^•_V(v) = ω^1_V(v) • ev(T_2) holds. For all leaves l ∈ lf(T_1), the relation holds by construction. For the induction step from vertex level n to level n − 1, the first line of the corresponding computation holds by definition, the second line is the induction hypothesis, and the third line follows from the right distributivity of the operations, cf. property (S6). In particular, the homomorphism property for multiplication follows. Of particular importance are the trees decorated by the generating sets {φ_x}_{x∈X} ∪ {0} and {φ_r}_{r∈R} ∪ {0} of S and S_R. The respective tree algebras are denoted by T and T_R. There is a map τ : T → T_R with nice algebraic properties between the tree algebras, which is defined based on the relation φ_x = Σ_{(x,r)∈C} φ_r between the edge labels. First, τ maps the trivial tree with one vertex in T to the trivial tree in T_R. Next, let T_φ be the decorated rooted tree with one edge which is labelled by φ. The tree T_φ is said to be the atomic tree with label φ. For an atomic tree T_{φ_x} ∈ T, the label function φ_x can be uniquely decomposed as a sum of functions corresponding to reactions according to its definition, cf. Equation (2.4). Thus, τ(T_{φ_x}) is defined as the sum of the corresponding atomic trees T_{φ_r}. A tree T ∈ T of height one can be written as a finite sum of atomic trees, i.e. T = Σ_{j=1}^{m} T_{φ_{x_j}}, and the map τ on T_1 is defined summand-wise. An arbitrary tree T ∈ T_n of height n can be written as T = Σ_{j=1}^{m} T_{φ_{x_j}} • T_j for atomic trees T_{φ_{x_j}} and trees T_j ∈ T_{≤(n−1)} of height ≤ n − 1. The map τ is defined recursively as τ(T) = Σ_{j=1}^{m} τ(T_{φ_{x_j}}) • τ(T_j). (3.5) The substitution process is illustrated in Fig. 5A and an example of the construction T → τ(T) for the CRN in Fig. 5B is given in Fig. 5C. Fig. 5 Illustration of the construction of the map τ : T → T_R. A Illustration of the general procedure of replacing an edge of T ∈ T with label φ_x by an edge for each summand in φ_x = Σ_{(x,r)∈C} φ_r = Σ_{j=1}^{m} φ_{r_j} with labels φ_{r_j}. This is performed recursively, starting with the terminal edges and working upwards toward the root. B An example CRS with the functions of chemicals given by φ_a = φ_{r_1} + φ_{r_2}, φ_c = φ_{r_1} + φ_{r_3} and φ_d = φ_{r_3}. C The map τ applied to the tree T on the left for the CRS featured in B.
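A compact sketch of the recursion (3.5), under the assumption that a tree is encoded as a list of (edge label, subtree) pairs and that `catalysis` maps each chemical x to the reactions r with (x, r) ∈ C, i.e. to the summands of φ_x; the names are hypothetical.

```python
def tau(tree, catalysis):
    """tree: list of (chemical, subtree) edges. Returns a tree whose edges
    carry reaction labels: each edge labelled by x is replaced by one edge
    per summand phi_r of phi_x, each carrying a copy of the child tree."""
    new_edges = []
    for x, subtree in tree:
        replaced = tau(subtree, catalysis)   # work upwards from the leaves
        for r in catalysis[x]:               # one new edge per reaction
            new_edges.append((r, replaced))
    return new_edges

# Example CRS of Fig. 5B: catalysis = {"a": ["r1", "r2"], "c": ["r1", "r3"],
# "d": ["r3"]}; tau([("a", [("c", [])])], catalysis) expands accordingly.
```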
Lemma 3.6. The map τ : T → T_R defined above is a homomorphism with respect to the addition and multiplication of trees. Moreover, the label of the root ω_V(t) is invariant under τ for any tree T ∈ T, i.e., using the evaluation map defined in (3.3), the relation ev(T) = ev(τ(T)) holds.
Proof Let T_1, T_2 be two nontrivial trees in T. They can be written as T_1 = Σ_{j=1}^{m} T_{φ_{x_j}} • T_j and T_2 = Σ_{j=m+1}^{l} T_{φ_{x_j}} • T_j with atomic trees T_{φ_{x_j}}; the claims then follow by induction from the recursive definition (3.5). Theorem 3.7. The evaluation maps ev : T → S ∪ {id|_X} and ev : T_R → S_R ∪ {id|_X} are surjective homomorphisms, and the diagram formed by ev and τ commutes, i.e. ev = ev ∘ τ. Proof The homomorphism property follows from the Lemmata 3.5 and 3.6. The commutativity of the diagram has also been proven in Lemma 3.6. The evaluation maps are surjective because the generators {φ_x}_{x∈X} ∪ {0} ⊂ S and {φ_r}_{r∈R} ∪ {0} ⊂ S_R have preimages given by the atomic trees {T_{φ_x}}_{x∈X} ∪ {T_0} ⊂ T and {T_{φ_r}}_{r∈R} ∪ {T_0} ⊂ T_R, combined with the fact that the tree algebras are closed under the operations of addition and multiplication.
The finiteness of S yields the following corollary.
Corollary 3.8. There is an N such that the set T_{≤N} of trees of height at most N maps surjectively onto S ∪ {id|_X}.
Moreover, Theorem 3.7 implies that the elements of S and S_R can be represented as decorated rooted trees by lifting the respective semigroup elements via the homomorphism ev. Definition 3.9. A tree representative of an element φ ∈ S is an element T ∈ T such that ev(T) = φ. The representative is called minimal if it has no proper subtree T' such that ev(T') = φ. The analogous definition holds for tree representatives of elements of S_R in T_R. Remark 3.10 (Biochemical interpretation of a tree). A tree T ∈ T_R of height n corresponds to a "reaction mechanism" of the network which can be described as follows: The reactions at the terminal edges are carried out and their products are supplied to their tail vertices. For each vertex, once it has received the products from all its outgoing edges, these products act as reactants for the reaction on its incoming edge. This procedure is carried out iteratively for the levels of the tree and therefore takes n − 1 steps for a tree of height n. For a tree T ∈ T, the respective reaction mechanism is the reaction mechanism described by τ(T) ∈ T_R.
Finally, the decorated rooted trees in T_R can be used to reformulate the F property given in Definition 2.4. In particular, the condition (F) can be encoded in a tree. Definition 3.11. Let x ∈ X_F be a chemical for which the condition (F) holds. Let R_1, . . ., R_n be the sets of reactions featured in (F) and denote by T_{φ_r} the atomic trees for the functions φ_r. Define the trees T^R_1 := Σ_{r∈R_1} T_{φ_r} (3.6) and, for 1 < i ≤ n, T^R_i := (Σ_{r∈R_i} T_{φ_r}) • T^R_{i−1}. (3.7) The tree T^R_n is said to be the F-tree for the element x. It is denoted by T^R(x).
Lemma 3.12. Let x ∈ X_F be a chemical for which the condition (F) holds.
Then for the F-tree T^R(x), the relation x ∈ ev(T^R(x))(∅) holds. In other words, T^R(x) represents a reaction mechanism that produces x from the food set.
Proof Let T^R_i be the trees from Definition 3.11 with T^R_n = T^R(x). It will be shown inductively that ran(R_i) ⊂ ev(T^R_i)(∅) ∪ F (3.8) holds for all i = 1, . . ., n, and therefore dom(R_{i+1}) ⊂ ev(T^R_i)(∅) ∪ F (3.9) holds for all i = 1, . . ., n − 1. The inclusion (3.9) follows from (3.8) together with the conditions (F1) and (F2). Then, the claim x ∈ ev(T^R_n)(∅) will follow from the inclusion (3.8) together with the condition (F3).
For i = 1, the definition of T^R_1 gives ev(T^R_1)(∅) = ∪_{r∈R_1} φ_r(∅). From condition (F1), i.e. dom(R_1) ⊂ F, it follows that ran(R_1) = ∪_{r∈R_1} φ_r(∅) ⊂ ev(T^R_1)(∅) ∪ F. And from condition (F2), i.e. dom(R_2) ⊂ ran(R_1) ∪ F, together with (F1), it follows that dom(R_2) ⊂ ev(T^R_1)(∅) ∪ F. In the induction step, the final inclusion is obtained from the induction hypothesis.
Characterization of Self-Sustaining and Self-Generating CRS
In this section, the representation of semigroup elements as trees is used to derive a succinct expression for the maximal RAF set of chemicals of a CRS as the fixed point of the generative dynamics Y → Φ_Y(∅) with the initial condition Y_0 = X_F (Theorem 4.8). In Section 4.1, it is shown that a CRS is RAF if and only if Φ_{X_F}(∅) = X_F holds (Theorem 4.1) and that the condition Φ_{X'_F}(∅) = X'_F is sufficient for a set of chemicals X'_F ⊂ X_F to be a RAF set of chemicals (Proposition 4.3). The latter statement is the key statement to prove that the fixed point of the dynamics, which is introduced in Section 4.2, satisfies the desired properties. The importance of fixed points of the dynamics as functionally closed and therefore biologically relevant entities is discussed in Section 4.3. This whole section follows a logical structure which is analogous to the structure of Section 4 in Loutchko (2022), where the analogous statements are proven for self-sustaining CRS. However, the treatment of self-generating CRS is technically more involved, which is forced by the fact that the F property is more involved than the self-sustainment property of a CRS, cf. Remark 2.6. Throughout this section, fix a CRS (X, R, C, F) and let S be its semigroup model.
Characterization of CRS with the RAF property
A CRS with the RAF property can be conveniently characterized via the set of chemicals generated by the maximal function of its semigroup model from the food set.
Theorem 4.1. A CRS is RAF if and only if Φ_{X_F}(∅) = X_F holds. Proof If the CRS is RAF, then by Lemma 3.12, the function ev(T^R(x)) satisfies x ∈ ev(T^R(x))(∅) for all x ∈ X_F. This function is an element of S_R but not of S in general. The RAF property allows one to construct a tree T(x) ∈ T such that ev(T^R(x)) ≤ ev(T(x)) and thus x ∈ ev(T(x))(∅): Choose a catalyst y(r) ∈ X for each reaction r ∈ R_i for all R_i featured in the condition (F) for x ∈ X_F. In analogy to the formulae (3.6) and (3.7), define T_1 := Σ_{r∈R_1} T_{φ_{y(r)}} and, for 1 < i ≤ n, T_i := (Σ_{r∈R_i} T_{φ_{y(r)}}) • T_{i−1}, with the atomic trees T_{φ_{y(r)}} ∈ T, and set T(x) := T_n. The properties (S1), (S2) and (S3) ensure that ev(T^R(x)) ≤ ev(T(x)). The function Φ obtained by summing the ev(T(x)) over all x ∈ X_F satisfies X_F ⊂ Φ(∅) and thus the equality Φ(∅) = X_F holds. Therefore, Φ is the maximal function Φ_{X_F} of S and the claim Φ_{X_F}(∅) = X_F holds.
To prove the reverse, assume that Φ_{X_F}(∅) = X_F holds. Choose a representative T ∈ T for Φ_{X_F}, i.e. a tree T such that ev(T) = Φ_{X_F}, and consider its image τ(T) ∈ T_R. Fix a chemical x ∈ X_F. By Theorem 3.7, the relation x ∈ ev(τ(T))(∅) holds. Choose a subtree T_min(x) ∈ T_R of τ(T) which is minimal under the condition x ∈ ev(T_min(x))(∅). (4.4) The existence of T_min(x) follows from the existence of τ(T). The sets R_1, . . ., R_n featured in the condition (F) are constructed as follows: Let the height of T_min(x) be n and define the set R_i to contain the reactions corresponding to the labels of all edges whose heads have level n + 1 − i for 1 ≤ i ≤ n, i.e. R_i := {r ∈ R such that φ_r = ω_E(e) for some e ∈ elv_{n+1−i}(T_min(x))}, (4.5) where ω_E is the decoration function for the edges of T_min(x). By the minimality of T_min(x), the conditions (F1) and (F2) must be satisfied (reactions in any of the R_i which do not satisfy the conditions could be omitted from the tree without violating the condition (4.4), thus contradicting the minimality of T_min(x)). The condition (F3) holds by construction of T_min(x). Finally, all reactions appearing as edge labels of T_min(x), and thereby all reactions in the sets R_1, . . ., R_n, are catalyzed because this holds for τ(T) by construction. This concludes the proof.
Corollary 4.2. If X'_F ⊂ X_F is a RAF set of chemicals, then the inclusion X'_F ⊂ Φ_{X'_F}(∅) holds. Proof Theorem 4.1 applied to the subCRS generated by X'_F, together with Remark 2.14, yields the claim when the functions above are applied to the empty set.
For a RAF set of chemicals X'_F, the inclusion X'_F ⊂ Φ_{X'_F}(∅) can be strict and therefore, in general, the equality X'_F = Φ_{X'_F}(∅) is not satisfied. However, it is a sufficient condition for X'_F to be a RAF set of chemicals.
Proposition 4.3. If the equality X'_F = Φ_{X'_F}(∅) holds for a set of chemicals X'_F ⊂ X_F, then X'_F is a RAF set of chemicals.
Proof The proof is analogous to the second part of the proof of Theorem 4.1. As in that proof, let T ∈ T be a tree representative for the function Φ_{X'_F} ∈ S of minimal height and let T_min(x) ∈ T_R be a minimal subtree of τ(T) that satisfies x ∈ ev(T_min(x))(∅) for x ∈ X'_F and has the same height as T. Moreover, let T be chosen such that all its edge labels are contained in the generating set {φ_x}_{x∈X'} of S(X'_F) (this is always possible since T represents an element of S(X'_F)). This leads to the sets of reactions R_1, . . ., R_n defined by (4.5) and satisfying the condition (F) (the verification of this condition is analogous to the verification in the proof of Theorem 4.1). One only needs to ensure that all reactions r contained in the R_i satisfy supp(r) ⊂ X', i.e. that they are elements of R|_{X'}, which is now shown: The domain of each R_i is contained in the set generated by the lower levels of T_min(x) together with the food set, because the edges corresponding to reactions with domains which are not contained in the set on the right hand side could be removed from T_min(x), which would contradict its minimality (in the above formula, ω_V is the vertex decoration function of T_min(x)). Therefore, the claimed inclusions follow inductively. Consider the functions φ^{R_i} := Σ_{r∈R_i} φ_r. By construction of T_min(x) as a subtree of τ(T) of the same height, the function φ^{R_i} is bounded from above by the corresponding function φ_i constructed from T. The φ_i are elements of S(X'_F) and are thus bounded from above by Φ_{X'_F}. This leads to the inclusion ran(R_i) ⊂ Φ_{X'_F}(∅) ∪ F = X'_F ∪ F. Together with the properties (F1) and (F2), this yields supp(R_i) ⊂ X' for all sets R_i.
This proposition will be used to show that the fixed points of the dynamics, defined in the next section, are RAF sets of chemicals.
Generative dynamics on a semigroup model and identification of the maximal RAF set of chemicals
The generative discrete dynamics of a CRS is introduced and used to determine the maximal RAF set of chemicals. Starting with any set of chemicals Y_0 ⊂ X_F, there is a maximal function Φ_{Y_0} (Definition 2.12) that is supported on this set. By acting on the empty set, Φ_{Y_0}(∅) gives all non-food chemicals that can be generated from the food set by using functionality supported only on Y_0 and the food set. The argument can be applied iteratively and gives rise to the following definition.
Definition 4.4. The generative dynamics of a CRS with the initial condition Y_0 ⊂ X_F is generated by the propagator D_g : Y ↦ Φ_Y(∅), where Φ_Y is the function of Y ⊂ X_F. The dynamics generated by D_g is parametrized by Z_{≥0} as Y_{n+1} = D_g(Y_n). The generative dynamics has analogous properties to the sustaining dynamics and the reader is referred to Section 4.2 in Loutchko (2022) for a more detailed discussion. Here, only the properties needed for the proof of the main theorem are given.
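The following sketch illustrates the generative dynamics under one assumed reading of Φ_Y(∅): fire every reaction whose reactants are already available from the food set and whose catalyst lies in Y or the food set, and collect the produced non-food chemicals. The encoding of reactions as (dom, ran, cat) triples is an illustrative choice, not the paper's notation.

```python
def phi_applied_to_empty(Y, reactions, food):
    """Assumed semantics of Phi_Y(empty set): chemicals producible from the
    food set by reactions whose catalyst lies in Y or the food set."""
    available = set(food)
    changed = True
    while changed:
        changed = False
        for dom, ran, cat in reactions:   # dom, ran: frozensets; cat: chemical
            if cat in set(Y) | set(food) and dom <= available \
                    and not ran <= available:
                available |= ran
                changed = True
    return available - set(food)

def generative_fixed_point(Y0, reactions, food):
    """Iterate Y_{n+1} = Phi_{Y_n}(empty set) until it repeats (Remark 4.5)."""
    seen, Y = [], set(Y0)
    while Y not in seen:
        seen.append(set(Y))
        Y = phi_applied_to_empty(Y, reactions, food)
    return Y
```

With Y_0 = X_F, the iteration descends monotonically (Proposition 4.6) and its fixed point is the maximal RAF set of Theorem 4.8.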
Remark 4.5. Due to the finiteness of the state space X, the dynamics either leads to a fixed point or to periodic behavior. If the initial condition Y_0 leads to a fixed point, the dynamics is said to stabilize and the fixed point is denoted by Y_0^{*g}.
Proposition 4.6. Let the dynamics be given by Definition 4.4 and assume Y_1 ⊂ Y_0. Then Y_{n+1} ⊂ Y_n holds for all n ∈ Z_{≥0} and the dynamics stabilizes. The analogous statement holds for the case that Y_1 ⊃ Y_0. Proof The proof proceeds by induction. By hypothesis, Y_1 ⊂ Y_0 is satisfied. Let Y_n ⊂ Y_{n−1}. This implies the ordering of the respective functions Φ_{Y_n} ≤ Φ_{Y_{n−1}} by the property (S8). Together with the property (S1), this gives Y_{n+1} = Φ_{Y_n}(∅) ⊂ Φ_{Y_{n−1}}(∅) = Y_n. The dynamics is thus given by the decreasing chain of subsets Y_0 ⊃ Y_1 ⊃ . . . ⊃ Y_n ⊃ Y_{n+1} ⊃ . . . and, because X_F is finite, the chain stabilizes. The case Y_1 ⊃ Y_0 is treated analogously.
Lemma 4.7. Let X'_F ⊂ X_F be a RAF set of chemicals and let Y be a set that satisfies X'_F ⊂ Y ⊂ X_F. Then the inclusion X'_F ⊂ Φ_Y(∅) holds.
Proof The chain of inclusions X'_F ⊂ Φ_{X'_F}(∅) ⊂ Φ_Y(∅) follows from the Corollary 4.2 and the property (S8).
Now the main theorem is stated and proven: Theorem 4.8 (on the maximal RAF set of chemicals). The maximal RAF set of chemicals of a CRS is the fixed point of the generative dynamics (Y_n)_{n∈Z_{≥0}} with the initial condition Y_0 = X_F, i.e. it is the set X_F^{*g}.
Proof It follows from Proposition 4.6 that the dynamics has a fixed point X_F^{*g}. By Proposition 4.3, this fixed point is a RAF set of chemicals. It remains to show the maximality of X_F^{*g}: For any RAF set of chemicals X'_F ⊂ X_F, the repeated application of Lemma 4.7 implies that X'_F ⊂ Y_n for all n ∈ Z_{≥0} and therefore X'_F ⊂ X_F^{*g}.
Corollary 4.9.A CRS with a nilpotent semigroup (S, •) has no nontrivial RAF sets of chemicals.
Proof Let X'_F ⊂ X_F be a nontrivial RAF set of chemicals. Then X'_F ⊂ Φ_{X'_F}(∅) holds by Corollary 4.2 and thus the condition (S1) implies that X'_F ⊂ Φ^n_{X'_F}(∅) for any power of Φ_{X'_F}, i.e. Φ^n_{X'_F} is nonzero for any n ∈ N. This contradicts the nilpotency of (S, •).
Nilpotent semigroups comprise the largest class of semigroups, as any magma with the product of any three elements equal to zero is automatically a semigroup, cf. Satoh et al (1994); Almeida (1995). The above corollary weeds out all nilpotent semigroups as candidates for semigroup models of self-generating CRS and states that such models are located in a more interesting class of semigroups.
Remark 4.10 (Connection to the RAF algorithm). Hordijk and Steel (2004) have presented an algorithm to find the maximal RAF set of reactions. It consists of a dynamics on the power set of reactions P(R) generated by R' → δ(γ(R')) with the initial condition R_0 = R. The following two operations are performed iteratively: (R1) For a set R' ⊂ R, remove all reactions from R' that have no catalyst in supp(R') until no further reductions can be made. This yields the set γ(R'). (R2) For a set R' ⊂ R, until no further reductions can be made, remove all reactions r from R' that do not satisfy dom(r) ⊂ Φ_{R'}(∅) ∪ F, where Φ_{R'} is the maximal function of the semigroup model S_R(R') := ⟨φ_r⟩_{r∈R'}. This yields the set δ(R').
Note that (R2) has been rephrased here to suit the language of semigroup models. This is similar in spirit to the algorithm given in Theorem 4.8 by the generative dynamics Y → Φ_Y(∅), where the sets of chemicals Y should be thought of as the support of the R' featured in the RAF algorithm. By forming the function Φ_Y, all reactions without a catalyst in Y = supp(R') are excluded, which corresponds to (R1). The application of the function Φ_Y to the empty set corresponds to the exclusion of all reactions without support in Φ_Y(∅), i.e. to the step (R2).
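A sketch of the reduction steps (R1) and (R2), under an assumed encoding of reactions as (dom, ran, cats) triples of frozensets, with cats the set of catalysts of the reaction; Φ_{R'}(∅) ∪ F is approximated by the closure of the food set under R'. All names here are illustrative.

```python
def closure(R, food):
    """Chemicals generatable from the food set by firing reactions of R;
    stands in for Phi_{R'}(empty set) united with F (an approximation)."""
    avail = set(food)
    changed = True
    while changed:
        changed = False
        for dom, ran, cats in R:
            if dom <= avail and not ran <= avail:
                avail |= ran
                changed = True
    return avail

def support(R):
    """supp(R'): all chemicals occurring in reactions of R' (assumed)."""
    return set().union(*({*dom, *ran} for dom, ran, cats in R)) if R else set()

def gamma(R):
    """(R1): repeatedly drop reactions with no catalyst in supp(R')."""
    R = set(R)
    while True:
        kept = {r for r in R if r[2] & support(R)}
        if kept == R:
            return R
        R = kept

def delta(R, food):
    """(R2): repeatedly drop reactions whose reactants are not producible."""
    R = set(R)
    while True:
        producible = closure(R, food)
        kept = {r for r in R if r[0] <= producible}
        if kept == R:
            return R
        R = kept

def max_raf_reactions(R, food):
    """Iterate R' -> delta(gamma(R')) from R_0 = R until stable."""
    R = set(R)
    while True:
        reduced = delta(gamma(R), food)
        if reduced == R:
            return R
        R = reduced
```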
Functionally closed RAF sets of chemicals
In addition to the knowledge of the maximal RAF set of chemicals, the hierarchy of RAF subsets of chemicals plays an important role in the understanding of a CRS. Of particular importance are the RAF sets of chemicals which satisfy the fixed point equation for the dynamics and are termed functionally closed RAF sets of chemicals in this section. This is closely related to the notion of functionally closed sets of self-sustaining chemicals, which is developed in Loutchko (2022), Section 4.4.
If, for a RAF set of chemicals X'_F ⊂ X_F, the inclusion X'_F ⊂ Φ_{X'_F}(∅) is strict, then the set is not stable in the sense that it will produce additional chemicals over time. First, the chemicals in Y_1 = Φ_{X'_F}(∅) will be generated from the food set, followed by chemicals in Y_2 = Φ_{Y_1}(∅), etc. By Proposition 4.6, this dynamics stabilizes at the fixed point X_F^{*g}, which contains the original RAF set of chemicals X'_F. Moreover, being a fixed point of the dynamics, X_F^{*g} satisfies Φ_{X_F^{*g}}(∅) = X_F^{*g} and is thus a RAF set of chemicals by Proposition 4.3. The set X_F^{*g} is not able to further catalyze the generation of chemicals outside of X_F^{*g} from the food set and is thus functionally closed. This motivates the following definition.
Definition 4.11. The functional closure of a RAF set of chemicals X'_F ⊂ X_F is the fixed point X_F^{*g} of the generative dynamics with the initial condition Y_0 = X'_F. If X'_F satisfies the fixed point equation Φ_{X'_F}(∅) = X'_F, then it is said to be a functionally closed RAF set of chemicals.
Alternatively, the closure of a RAF set of chemicals can be characterized as follows: Lemma 4.12. The functional closure X_F^{*g} of a RAF set of chemicals X'_F is the unique minimal functionally closed RAF set of chemicals which contains X'_F.
Proof Let Y be a minimal functionally closed RAF set which contains X'_F and let (Y_n)_{n∈Z_{≥0}} be the generative dynamics with the initial condition Y_0 = X'_F and fixed point X_F^{*g}. Then Y_n ⊂ Y holds for all n ∈ Z_{≥0}, which can be verified by induction: For n = 0, the claim holds by assumption, and the inductive step is verified by Y_{n+1} = Φ_{Y_n}(∅) ⊂ Φ_Y(∅) = Y, which follows from the property (S8). This implies that X_F^{*g} ⊂ Y and, by the minimality of Y, the equality X_F^{*g} = Y must hold.
Remark 4.13. Note that the characterization of the closure of a RAF set of chemicals given by Lemma 4.12 does not extend to arbitrary sets, i.e. in general there does not exist a unique minimal functionally closed set of chemicals which contains Y for an arbitrary set of chemicals Y ⊂ X_F^{*g}. Fig. 6 provides an illustration. The shown CRS is RAF and it has the functionally closed sets of chemicals given by {c, d, e}, {c, d} and {d, e}. For the set {d}, there exists no unique minimal functionally closed set of chemicals which contains it. The lattice of all functionally closed sets of chemicals can be obtained by the following construction. Define the reduced generative dynamics by the propagator D_rg : Y ↦ Y ∩ Φ_Y(∅), denote its fixed point for the initial condition Y by Y^{*rg}, set N(Y) := {(Y \ {y})^{*rg} for y ∈ Y}, and recursively define N_0 := {X_F^{*g}} and N_{i+1} := ∪_{Y∈N_i} N(Y) (a computational sketch of this construction is given below). Due to the finiteness of X, there is an N ∈ N such that N_{i+1} = {∅} for all i > N. The following theorem gives a description of the lattice of functionally closed RAF sets of chemicals of a CRS, which extends the characterization of the maximal RAF set of chemicals provided in Theorem 4.8. Theorem 4.14. The set N := ∪_{i} N_i is the set of all functionally closed RAF sets of chemicals of the CRS. Proof By construction, all elements of N are functionally closed RAF sets of chemicals. It remains to show that all functionally closed RAF sets of chemicals are indeed contained in N. In this regard, recall that N(Y) contains all maximal closed RAF sets of chemicals which are strictly contained in Y. For a functionally closed RAF set of chemicals X'_F, there exists a chain of maximal length of functionally closed RAF sets of chemicals X'_F ⊊ . . . ⊊ X_F^{*g}; descending along this chain shows that X'_F is contained in some N_i and hence in N. This finishes the application of the semigroup models and their representations by decorated rooted trees to self-generating CRS. The possible implications of the results of this section are now discussed.
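A sketch of the lattice construction via the reduced generative dynamics, again under the assumed reading of Φ_Y(∅) used in the earlier sketch; frozensets keep sets of chemicals hashable. All encodings are illustrative.

```python
def _phi(Y, reactions, food):
    """Assumed reading of Phi_Y(empty set), as in the earlier sketch."""
    avail = set(food)
    grown = True
    while grown:
        grown = False
        for dom, ran, cat in reactions:
            if cat in set(Y) | set(food) and dom <= avail and not ran <= avail:
                avail |= ran
                grown = True
    return frozenset(avail - set(food))

def reduced_fixed_point(Y0, reactions, food):
    """Iterate the reduced propagator Y -> Y & Phi_Y(empty); always stabilizes."""
    Y = frozenset(Y0)
    while True:
        nxt = Y & _phi(Y, reactions, food)
        if nxt == Y:
            return Y
        Y = nxt

def closed_raf_lattice(max_raf, reactions, food):
    """N_0 = {X_F^{*g}}; N_{i+1} collects (Y \\ {y})^{*rg} over Y in N_i."""
    level = {frozenset(max_raf)}
    found = set(level)
    while level:
        nxt = set()
        for Y in level:
            for y in Y:
                fp = reduced_fixed_point(Y - {y}, reactions, food)
                if fp not in found:
                    nxt.add(fp)
        found |= nxt
        level = nxt
    return found   # all functionally closed RAF sets, including the empty set
```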
Discussion
A general discussion of the semigroup models of CRS is given by Loutchko (2022), where, for example, algebraic properties and the possibility of analyzing the computational properties of CRS via their semigroup models are expounded upon.
In this article, it was demonstrated how the language of semigroup models provides a natural framework to treat CRS with the RAF property, to determine the maximal RAF set of chemicals, and to determine the lattice of functionally closed RAF sets of chemicals. The technical basis is provided by the representation of the elements of the semigroup models as decorated rooted trees, because this representation is particularly useful in making the relation of semigroup elements to the F property precise. It will be interesting to investigate whether such representations can be used more generally in the theoretical study of (not necessarily finite) semigroups and semirings. Similar representations have turned out to be useful in the theory of self-similar groups introduced by Nekrashevych (2005).
With regard to CRS theory, the notion of functionally closed sets of RAF chemicals is a very natural concept within the theory of semigroup models. One is naturally led to consider the fixed points of the dynamics, which are RAF sets of chemicals by Proposition 4.3. Moreover, Lemma 4.12 ensures that each RAF set of chemicals has a uniquely determined functional closure with nice properties. The analysis of the lattice of functionally closed RAF sets of chemicals of a CRS within a living organism can potentially provide insights into the modular organization of its metabolism and the respective control mechanisms. The fact that arbitrary subsets of X_F - in contrast to RAF sets of chemicals - do not have a unique minimal functionally closed RAF set of chemicals which contains them inspires further investigation of CRS of real biological systems. If a chemical (or a set of chemicals) has a unique minimal functionally closed RAF set of chemicals to which it belongs, then one can conjecture that this chemical is specific to the respective functional module, and it is likely that this chemical was acquired together with the respective module in the course of evolution. If, however, this is not the case - such as for the chemical d in the example shown in Fig. 6 - then the respective chemical serves as a kind of mediator between the functional modules in which it is contained. It will be interesting to test such hypotheses on CRS of biological systems and to develop new ones by applying the techniques provided by the semigroup formalism.
Another possibility suggested by the algebraic models of CRS is the coarse-graining obtained by taking quotients of the semigroups which are well-behaved with respect to the algebraic operations. The technical difficulty is thereby to relate the quotients of functions, which live in T(X), to quotients of the state space X in a natural manner. This work is currently being finalized. This more algebraic approach provides an alternative way to reveal and analyze the modularity of a given CRS. Whereas the lattice of functionally closed RAF sets of chemicals relies on the self-generating property, the quotient structures do not. Therefore, it will be interesting in the future to compare the approach presented in Section 4.3 of this article to the algebraic coarse-graining procedures.
Fig. 1 An illustration of the algebra of decorated rooted trees. A The edge labels a, b, c, d, e ∈ A determine the vertex labels of the two trees T_1 and T_2 recursively: the leaves are labelled by the multiplicatively neutral element and each vertex function is given by the summation of the labels over all outgoing edges, multiplied with the labels of the vertices at their heads. All edge and resulting vertex labels are shown here, whereas in B, C and D only the labels of the edges and of the root are shown. B Addition of the two trees T_1 and T_2: The roots of both trees are identified and the labels on all edges of both trees are retained. The vertex labels are determined as in A. The root label of T_1 + T_2 is equal to the sum of the root labels of T_1 and T_2. C Multiplication of two trees T_1 and T_2: Each leaf of T_1 is replaced with a copy of T_2. Thereby, the edge labels from the original trees are retained, which yields the respective vertex labels. If the right distributivity of the operations + and • holds, then the root label of T_1 • T_2 is equal to the concatenation of the root labels of T_1 and T_2. D The replacement of an edge with label b = b_1 + b_2 by two edges with labels b_1 and b_2. A copy of the child tree of the original edge is attached to each of the new edges. If the right distributivity of the operations + and • holds, then the root labels of both shown trees are equal.
Therefore X_F^{*g} is termed the functional closure of X'_F. A characterization of the lattice of all functionally closed RAF sets of a CRS is provided in Theorem 4.14. It is based on the reduced generative dynamics given by Y → Y ∩ Φ_Y(∅). This dynamics always has a fixed point, denoted by Y_0^{*rg} for the initial condition Y_0. For each set Y ⊂ X_F, the set of fixed points N(Y) := {(Y \ {y})^{*rg} for y ∈ Y} is introduced and one recursively defines N_0 := {X_F^{*g}} and N_{i+1} := ∪_{Y∈N_i} N(Y).
Fig. 2 Example of a graphical representation of a CRS. The CRS consists of five chemicals X = {a, b, c, d, e} and four reactions a + b → c, c + b → d, b + d → e and c + d → e, which are catalyzed by d, a, e and d, respectively. The food set is given by F = {a, b}.
Fig. 4 A Addition of two trees T_1 and T_2: The roots of both trees are identified and the labels on all edges of both trees are retained. All vertex labels are given by the relation (3.1). B Multiplication of two trees T_1 and T_2: Each leaf of T_1 is replaced with a copy of T_2. Thereby the edge labels from the original trees are retained, yielding the vertex labels by relation (3.1). Only the root labels are shown in the figure (to arrive at the form of the root label of T_1 • T_2 given here, the right distributivity, cf. property (S6), is used).
Fig. 6 This CRS has the three functionally closed sets of chemicals given by {c, d, e}, {c, d} and {d, e}. There is no unique minimal functionally closed set of chemicals which contains the set {d}.
This dynamics always stabilizes, and the fixed point for the initial condition Y_0 is denoted as Y_0^{*rg}. The fixed point equation of this dynamics reads Y = Y ∩ Φ_Y(∅), which is equivalent to the fixed point equation Y = Φ_Y(∅) of the generative dynamics. For a set Y ⊂ X_F, define the set N(Y) ⊂ P(X) as N(Y) := {(Y \ {y})^{*rg} for y ∈ Y}. All of the sets contained in N(Y) are functionally closed RAF sets of chemicals by Proposition 4.3. Moreover, let X'_F ⊂ Y be a functionally closed RAF set of chemicals which is strictly contained in Y and is maximal with this property. Then there is a chemical y ∈ Y \ X'_F and one verifies that X'_F = (Y \ {y})^{*rg}. Now inductively define the following sets: N_0 := {X_F^{*g}} and N_{i+1} := ∪_{Y∈N_i} N(Y) for all i ∈ Z_{≥0}.
Effects of Sr/F-Bioactive Glass Nanoparticles and Calcium Phosphate on Monomer Conversion, Biaxial Flexural Strength, Surface Microhardness, Mass/Volume Changes, and Color Stability of Dual-Cured Dental Composites for Core Build-Up Materials
This study prepared composites for core build-up containing Sr/F bioactive glass nanoparticles (Sr/F-BGNPs) and monocalcium phosphate monohydrate (MCPM) to prevent dental caries. The effect of the additives on the physical/mechanical properties of the materials was examined. Dual-cured resin composites were prepared using dimethacrylate monomers with added Sr/F-BGNPs (5 or 10 wt%) and MCPM (3 or 6 wt%). The additives reduced the light-activated monomer conversion by ~10%, but their effect on the conversion upon self-curing was negligible. The conversions of light-curing or self-curing polymerization of the experimental materials were greater than that of the commercial material. The additives reduced biaxial flexural strength (191 to 155 MPa), modulus (4.4 to 3.3 GPa), and surface microhardness (53 to 45 VHN). These values were comparable to those of the commercial material or within the acceptable range of the standard. The changes in the experimental composites' mass and volume (~1%) were similar to those of the commercial comparison. The color change of the commercial material (1.0) was lower than that of the experimental composites (1.5-5.8). The addition of Sr/F-BGNPs and MCPM negatively affected the physical/mechanical properties of the composites, but the results were satisfactory except for color stability.
Introduction
Dental composites are usually placed as dentine substitute material (core build-up restoration) for severely damaged teeth that require complex direct or indirect restorations [1]. The use of composites that can be polymerized via both light-activated and chemical-activated polymerization (dual-cured) is beneficial to ensure sufficient polymerization of the materials in a deep cavity [2,3]. The placement of definitive restorations for teeth temporized with core build-up materials may be delayed in some circumstances, such as ongoing orthodontic treatments or patients with limited budget/cooperation. The main concern for an extensive cavity restored with resin composite is the risk of bacterial microleakage and demineralization at tooth-composite margins [4,5]. This may subsequently cause secondary caries and severe infections that require complicated treatments or extraction. The main limitations of the commonly used dual-cured resin composites for core build-up include the lack of ion-releasing and antibacterial actions to inhibit biofilm formation and tooth demineralization [6].
Several types of calcium phosphates (CaP) were used as reactive fillers to encourage ion release for promoting anti-caries actions. It was demonstrated that incorporating CaP with a low Ca/P ratio, such as monocalcium phosphate monohydrate (MCPM), promoted ion release and mineralizing actions in resin composites [7,8]. However, the addition of MCPM may significantly decrease the physical/mechanical properties of the materials. This may consequently compromise the strength and longevity of the restorations. A previous study reported that the incorporation of MCPM reduced the biaxial flexural strength of experimental resin composites by approximately 20% [9,10]. Bioactive glass nanoparticles represent an alternative ion-releasing filler [11]. Several studies have incorporated sol-gel-derived bioactive glass nanoparticles into resin composites to enable ion-releasing or antibacterial actions in the materials [12][13][14][15]. Additionally, it was reported that the addition of bioactive glass nanoparticles exhibited no significant effects on the mechanical strength of the composites [16][17][18]. Moreover, bioactive glass produced in a spherical shape exhibited a more remarkable mineralizing ability than glass produced as irregular/granular particles [19]. It was demonstrated that the use of nanosized glass with a spherical shape (diameter ~200 nm) provided structural color effects to enhance the optical properties of the materials [20]. The bioactive glass contained multiple ions in its glass network. The addition of bioactive glass containing fluoride may promote the precipitation of acid-resistant fluorohydroxyapatite on the tooth structure [21]. The combination of strontium with fluoride ions also demonstrated synergistic effects on anti-caries actions [22,23]. Furthermore, bioactive glass containing Sr exhibited greater antibacterial actions than non-Sr-doped bioactive glass [24]. It was also demonstrated that Sr 2+ might increase the nucleation clusters essential for apatite precipitation [25,26].
It was expected that a combination of MCPM and bioactive glass nanoparticles in composites might allow ion release without detrimental effects on the strength of composites [9,10]. However, the mismatch of refractive indexes between reactive fillers with resin matrix may affect the polymerization of composites due to increased light-scattering [8]. The decrease in monomer conversion could affect polymer rigidity and promote water sorption leading to polymer expansion [27]. This may subsequently lead to the reduction of strength [28], poor color stability [29], hydrolytic degradation [30], and monomer elution [31]. Furthermore, the reactive fillers were not silanized, which may affect the stress transfer within the resin matrix and increase the risk of filler degradation, thus decreasing the mechanical strength of the materials [32]. The main concern for resin-based materials is the excessive shrinkage stress generated during polymerization. Additionally, the continuation of polymerization of dual-cured resin composites by chemical-activated polymerization resulted in post-gel shrinkage in the materials [31,33]. It was reported that hygroscopic expansion due to water sorption could compensate for the polymerization shrinkage and reduce the shrinkage stress of the composites [34,35]. The addition of hydrophilic fillers may enhance water sorption to expand the polymer network. However, excessive volume expansion may increase the risk of hydrolytic degradation [36], tooth expansion, or cracks [37,38].
The objective of this study was to prepare dual-cured resin composites for core build-up restoration containing MCPM and Sr/F-bioactive glass nanoparticles (Sr/F-BGNPs), followed by the assessment of their physical/mechanical properties. The effects of the additives on the degree of monomer conversion upon light/self-curing mechanisms, biaxial flexural strength, Vickers surface microhardness, mass/volume changes, and color stability were determined. The effect of raising the concentration of MCPM and Sr/F-BGNPs on the tested physical/mechanical properties was also examined. The research hypothesis was that the addition of MCPM and Sr/F-BGNPs should not affect the physical/mechanical properties of the materials.
Preparation of Sr/F-Bioactive Glass Nanoparticles (Sr/F-BGNPs)
Sr/F bioactive glass nanoparticles (Sr/F-BGNPs) were synthesized using the sol-gel process under basic conditions based on previous studies [39][40][41]. Firstly, 0.32 M of ammonium hydroxide, 6 M of Milli Q water, 0.035 M sodium fluoride (Sigma Aldrich, St. Louis, MO, USA), and 14 M of ethanol (Sigma Aldrich, St. Louis, MO, USA) were mixed and stirred at 500 rpm for 30 min in a 500 mL Erlenmeyer flask. Then, tetraethyl orthosilicate (0.28 M) (TEOS, Sigma Aldrich, St. Louis, MO, USA) was gradually added to the solution. Next, the mixed solution was stirred for 8 h. Particles were then collected by centrifugation. They were then incorporated with calcium nitrate tetrahydrate (0.14 M) (Sigma Aldrich, St. Louis, MO, USA) and strontium nitrate (0.42 M) (Sigma Aldrich, St. Louis, MO, USA). The synthesized particles were calcined at 680 °C for 3 h with a heating rate of 3 °C/min to allow the incorporation of calcium (Ca), strontium (Sr), sodium (Na), and fluoride (F). Then, the particles were thoroughly cleaned with ethanol.
Preparation of Experimental Dual-Cured Resin Composites for Core Build-Up Restoration
Dual-cured composites were prepared using the methods reported in previous studies [9,42]. The liquid phase of the composites is given in Table 1. The powder phase contained microparticle borosilicate glass (diameters of 0.7 and 7 µm, Esstech, Essington, PA, USA), nanoparticle borosilicate glass (diameter of 180 nm, SCHOTT UK Ltd., Wolverhampton, UK), monocalcium phosphate monohydrate (MCPM, Ca/P ratio of ~0.5 [43], Himed, Bethpage, NY, USA) and Sr/F-BGNPs. Five formulations of the experimental materials with high and low levels of MCPM (3 or 6 wt%) and Sr/F-BGNPs (5 or 10 wt%) were prepared (Table 2). A scanning electron microscope (SEM, JSM 7800F, JEOL Ltd., Tokyo, Japan) was used to examine the morphology and size of the Sr/F-BGNPs and MCPM. Additionally, an energy-dispersive X-ray spectrometer (EDX, X-Max 20, Oxford Instruments, Abingdon, UK) was employed to assess the elemental composition of the fillers with the beam voltage set at 5 kV.
The initiator and activator liquids were hand-mixed with the powder phase using a powder to liquid mass ratio of 3:1. Then, the mixed initiator and activator pastes were poured into a black opaque double-barrel syringe (medmix Switzerland AG, Haag, Switzerland). The composite pastes were injected and mixed using a dispenser and mixing tip (medmix Switzerland AG, Haag, Switzerland). The commercially available dual-cured resin composite (MultiCore ® Flow, Shade Light, Ivoclar Vivadent, Schaan, Liechtenstein) served as the commercial comparison (Table 3).
Degree of Monomer Conversion (DC)
An FTIR-ATR spectrometer (Nicolet iS5, Thermo Scientific, Waltham, MA, USA) (n = 5) was used to determine the DC of the materials. The composites were injected into a metal circlip (10 × 1 mm, Springmaster Ltd., Redditch, UK) on the ATR diamond. A transparent acetate sheet was then placed over the specimen. The specimens were then light-cured with an LED dental curing light (irradiance ~1200 mW/cm², SmartLite Focus Pen Style, DENTSPLY Sirona, York, PA, USA) for 20 s. The tip of the curing light was positioned ~1-2 mm above the composite surface. The FTIR spectra (800-4000 cm⁻¹) were then obtained from the bottom surface of the specimen using a resolution of 4 cm⁻¹ and 8 scans. The temperature was controlled at 25 °C. The degree of monomer conversion (DC, %) was obtained using Equation (1) [9].
DC = 100 × (ΔA_0 − ΔA_t)/ΔA_0, (1) where ΔA_0 and ΔA_t are the absorbance heights of the C-O peak (1320 cm⁻¹) [44] relative to the background level at 1335 cm⁻¹ before curing and after curing at time t, respectively. For the DC upon chemical activation, the specimens were left on the ATR diamond without light-curing to allow chemical-activated polymerization for 10 min. The FTIR spectra were recorded for up to 30 min. The final conversion at late time was calculated using a linear extrapolation of DC versus the inverse time to zero [42].
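As a worked sketch, the conversion and the extrapolated final conversion can be computed as follows; Equation (1) is assumed to take the standard form reconstructed above, and the late-time readings in the example are hypothetical.

```python
import numpy as np

def degree_of_conversion(dA0: float, dAt: float) -> float:
    """Equation (1) as reconstructed: DC = 100*(dA0 - dAt)/dA0, with dA the
    1320 cm^-1 peak height measured relative to the 1335 cm^-1 background."""
    return 100.0 * (dA0 - dAt) / dA0

def final_conversion(times_min, dc_percent):
    """Linear extrapolation of DC against 1/t; the intercept is DC at 1/t -> 0,
    i.e. the final conversion at infinite time."""
    inv_t = 1.0 / np.asarray(times_min, dtype=float)
    slope, intercept = np.polyfit(inv_t, np.asarray(dc_percent, dtype=float), 1)
    return intercept

# hypothetical late-time readings (minutes, %):
print(final_conversion([10, 15, 20, 30], [62.0, 63.5, 64.3, 65.2]))
```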
Biaxial Flexural Strength (BFS) and Biaxial Flexural Modulus (BFM)
The composites were loaded in a metal ring (1 × 10 mm) to produce disc specimens (n = 5). The composites were covered by acetate sheets and glass slides. They were light-cured for 20 s on both sides. The specimens were left for 24 h at a temperature controlled at 25 ± 1 °C before removal from the ring. Specimens with visible defects were excluded from the test. The selected specimens were immersed in 10 mL of deionized water and incubated at a temperature controlled at 37 ± 1 °C for 24 h and 4 weeks. Prior to the test, the specimen thickness was measured in three areas using a digital vernier caliper. The specimens were then positioned in a ball-on-ring testing jig. The test was performed using a universal testing machine (AGSX, Shimadzu, Kyoto, Japan) with a 500 N load cell and a crosshead speed set at 1 mm/min. The biaxial flexural strength (BFS, Pa) and biaxial flexural modulus (BFM, Pa) were then calculated using Equation (2) and Equation (3), respectively, where F is the failure load (N), d represents the thickness of the disc specimens (m), r is the radius of the circular support of the ball-on-ring jig (m), and v is Poisson's ratio (0.3). Furthermore, ΔH/ΔW_c represents the rate of change of the load with respect to the central deflection, i.e. the gradient of the force versus displacement curve (N/m) [15]. Additionally, β_c and q represent the center deflection function (0.5024) and the ratio of the support radius to the specimen radius, respectively. After testing, the fracture area of representative specimens was analyzed using SEM (JSM 7800F, JEOL Ltd., Tokyo, Japan) and EDX (X-Max 20, Oxford Instruments, Abingdon, UK).
Surface Microhardness
Disc specimens (n = 5) were produced according to Section 2.3 and placed in deionized water (10 mL). The specimens were incubated at 37 °C for 24 h and 4 weeks. A microhardness tester (FM-800, Future-Tech Corp, Kanagawa, Japan) was used to assess the Vickers surface microhardness of the materials [45]. The indentation load was set at 300 g with a dwell time of 10 s. The Vickers microhardness number (VHN) was then obtained by averaging 5 different areas on the sample.
Mass and Volume Changes
The mass and volume of the specimens were recorded using a 4-figure balance equipped with a density kit (MS-DNY-43, METTLER TOLEDO, Columbus, OH, USA) [46]. Then, the specimens were immersed in deionized water (10 mL) and stored at 37 °C. The mass and volume of the specimens at each time point (3, 6, 24, 48, 120, 168, 336, 504, and 672 h) were recorded without changing the storage solution. The changes in mass (ΔM, wt%) and volume (ΔV, vol%) of the specimens were obtained using the following equations [47].
ΔM = 100 × (M_t − M_0)/M_0 (4) and ΔV = 100 × (V_t − V_0)/V_0 (5), where M_t and V_t are the mass and volume of the specimens recorded at time t, respectively, and M_0 and V_0 are the mass and volume of the specimens measured before placing in water.
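A minimal sketch of Equations (4) and (5) as reconstructed above; the specimen readings are hypothetical.

```python
def percent_change(x_t: float, x_0: float) -> float:
    """Equations (4)-(5) as reconstructed: 100*(X_t - X_0)/X_0."""
    return 100.0 * (x_t - x_0) / x_0

m_0, v_0 = 0.2750, 0.1400   # hypothetical mass (g) and volume (cm^3) before water
m_t, v_t = 0.2778, 0.1414   # hypothetical readings after storage
print(f"dM = {percent_change(m_t, m_0):.2f} wt%, "
      f"dV = {percent_change(v_t, v_0):.2f} vol%")
```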
Color Stability (∆E 00 )
The measurement was conducted following the protocol reported in the previous study [48]. Briefly, disc specimens were prepared and immersed in deionized water at a controlled temperature of 37 °C for 7 days. An intraoral dental spectrophotometer (Easyshade V; VITA Zahnfabrik, Baden-Württemberg, Germany) was employed to determine the CIELab coordinates of the specimens before and after immersion in deionized water.
The spectrophotometer was calibrated using the standard VITA Classical shade guide (A3, VITA Zahnfabrik, Baden-Württemberg, Germany). The measurement was performed in a box with a controlled illuminance of ~1000 lux (LX1330B Light Meter; Dr. Meter Digital Illuminance, StellarNet Inc., Tampa, FL, USA). Then, the tip of the spectrophotometer was positioned at the center of the specimens over a standardized photography neutral 18% gray card (Kodak Gray Cards, Rochester, New York, NY, USA). Then, the color coordinates (CIE L*, a*, b*, C*, and h°) of the composites were recorded in triplicate. The L*, a*, and b* represent the lightness, red-green, and yellow-blue axes, respectively. C* and h° represent chroma and hue angle, respectively. The color differences or color stability (ΔE00) of the specimens after immersion in water were calculated using the CIEDE2000 (ΔE00) formula as follows.
ΔE00 = [(ΔL′/(K_L S_L))² + (ΔC′/(K_C S_C))² + (ΔH′/(K_H S_H))² + R_T (ΔC′/(K_C S_C))(ΔH′/(K_H S_H))]^{1/2}, where ΔL′, ΔC′, ΔH′ represent the differences in lightness, chroma, and hue, respectively. R_T represents a rotation function related to the interaction between chroma and hue differences in the blue region. Additionally, S_L, S_C, and S_H are weighting functions, and K_L, K_C, K_H are the correction terms for experimental conditions [49].
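For reference, a self-contained sketch of the CIEDE2000 computation following the standard Sharma et al. formulation, with the correction terms K_L = K_C = K_H = 1; the CIELab inputs in the example are hypothetical, and the implementation is ours rather than the instrument's.

```python
import math

def delta_e_2000(lab1, lab2, kL=1.0, kC=1.0, kH=1.0):
    """CIEDE2000 color difference between two (L*, a*, b*) triples."""
    L1, a1, b1 = lab1
    L2, a2, b2 = lab2
    C1, C2 = math.hypot(a1, b1), math.hypot(a2, b2)
    Cbar = 0.5 * (C1 + C2)
    G = 0.5 * (1 - math.sqrt(Cbar**7 / (Cbar**7 + 25.0**7)))
    a1p, a2p = (1 + G) * a1, (1 + G) * a2
    C1p, C2p = math.hypot(a1p, b1), math.hypot(a2p, b2)
    h1p = math.degrees(math.atan2(b1, a1p)) % 360
    h2p = math.degrees(math.atan2(b2, a2p)) % 360
    dLp, dCp = L2 - L1, C2p - C1p
    if C1p * C2p == 0:
        dhp = 0.0
    else:
        dhp = h2p - h1p
        if dhp > 180:
            dhp -= 360
        elif dhp < -180:
            dhp += 360
    dHp = 2 * math.sqrt(C1p * C2p) * math.sin(math.radians(dhp) / 2)
    Lbp, Cbp = 0.5 * (L1 + L2), 0.5 * (C1p + C2p)
    if C1p * C2p == 0:
        hbp = h1p + h2p
    elif abs(h1p - h2p) <= 180:
        hbp = 0.5 * (h1p + h2p)
    elif h1p + h2p < 360:
        hbp = 0.5 * (h1p + h2p + 360)
    else:
        hbp = 0.5 * (h1p + h2p - 360)
    T = (1 - 0.17 * math.cos(math.radians(hbp - 30))
           + 0.24 * math.cos(math.radians(2 * hbp))
           + 0.32 * math.cos(math.radians(3 * hbp + 6))
           - 0.20 * math.cos(math.radians(4 * hbp - 63)))
    SL = 1 + 0.015 * (Lbp - 50) ** 2 / math.sqrt(20 + (Lbp - 50) ** 2)
    SC = 1 + 0.045 * Cbp
    SH = 1 + 0.015 * Cbp * T
    RT = (-2 * math.sqrt(Cbp**7 / (Cbp**7 + 25.0**7))
          * math.sin(math.radians(60 * math.exp(-(((hbp - 275) / 25) ** 2)))))
    return math.sqrt((dLp / (kL * SL)) ** 2 + (dCp / (kC * SC)) ** 2
                     + (dHp / (kH * SH)) ** 2
                     + RT * (dCp / (kC * SC)) * (dHp / (kH * SH)))

# hypothetical composite shade readings before/after immersion:
print(delta_e_2000((74.0, 2.1, 18.5), (72.5, 2.4, 20.1)))
```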
Statistical Analysis
The results in the current study were reported as mean (SD) or median (min-max). The data were analyzed using Prism 9 for macOS version 9.3.1 (GraphPad Software, San Diego, CA, USA). The normality of data was checked using the Shapiro-Wilk test. A one-way ANOVA followed by Tukey multiple comparisons was used for normally distributed data. For data with non-normal distribution, Kruskal-Wallis and Dunn's tests were employed. The data were considered statistically significant if p < 0.05. Additionally, the power of the test was calculated using G*Power for MacOS version 3.1.9.6 (Heinrich Heine University Düsseldorf, Düsseldorf, Germany) [50] using the data from the previous study [9]. The result suggested that using 5 samples for each test should provide power > 0.95 at alpha = 0.05 for a one-way ANOVA.
The current study also used factorial design analysis [10] to assess the effect of increasing concentrations of MCPM (from 3 to 6 wt%) and Sr/F-BGNPs (from 5 to 10 wt%) on the tested properties. The formulation of the experimental composites in the current study contained 2 main variables at 2 levels of factorial design. Hence, the factorial equation is given as ln P = ⟨ln P⟩ + b_1 x_1 + b_2 x_2 + b_{1,2} x_1 x_2 (6), where x_1 and x_2 (= ±1) encode the low/high levels of the additives, b_1 and b_2 represent the effect of raising MCPM (from 3 to 6 wt%) and Sr/F-BGNPs (from 5 to 10 wt%) concentrations on the tested property value (P) of the experimental materials, b_{1,2} represents an interaction effect from the increasing concentrations of MCPM and Sr/F-BGNPs, and the bracket indicates the average value of ln P. Additionally, the percentage effect of the variables (Q, %) can be obtained using Equation (8).
Q = 100 × (G_H − G_0)/G_0 (8), where G_H and G_0 represent the geometric average properties for the two formulations containing high-level additives versus the other two formulations with low-level additives, respectively. The b values from Equation (6) are denoted by b_i. The effect of the rising concentration of MCPM was therefore obtained by comparing the geometric average of S10M6 and S5M6 with that of S10M3 and S5M3. Additionally, the effect from Sr/F-BGNPs was determined by the geometric average of S10M6 and S10M3 over that of S5M6 and S5M3. The interaction effects were then calculated from the geometric average of S10M6 and S5M3 over that of S10M3 and S5M6. The main effects and interaction effects were considered significant if the b values were greater than their 95% confidence intervals (95% CI).
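A sketch of the effect estimates under the reconstructed Equations (6) and (8), assuming coded levels ±1 so that Q = 100(exp(2b) − 1) reproduces the geometric-average comparison described above; the property values in the example are hypothetical.

```python
import math

def factorial_effects(p_S5M3, p_S10M3, p_S5M6, p_S10M6):
    """Two-level factorial estimates; x1 = MCPM (3 -> 6 wt%),
    x2 = Sr/F-BGNPs (5 -> 10 wt%)."""
    lp = {k: math.log(v) for k, v in dict(S5M3=p_S5M3, S10M3=p_S10M3,
                                          S5M6=p_S5M6, S10M6=p_S10M6).items()}
    b1 = 0.25 * (lp["S5M6"] + lp["S10M6"] - lp["S5M3"] - lp["S10M3"])    # MCPM
    b2 = 0.25 * (lp["S10M3"] + lp["S10M6"] - lp["S5M3"] - lp["S5M6"])    # BGNPs
    b12 = 0.25 * (lp["S10M6"] + lp["S5M3"] - lp["S10M3"] - lp["S5M6"])   # interaction
    Q = lambda b: 100.0 * (math.exp(2.0 * b) - 1.0)   # = 100*(G_H - G_0)/G_0
    return {"MCPM": (b1, Q(b1)), "Sr/F-BGNPs": (b2, Q(b2)),
            "interaction": (b12, Q(b12))}

# hypothetical BFS values (MPa) for the four formulations:
print(factorial_effects(180.0, 172.0, 165.0, 155.0))
```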
SEM-EDX of Sr/F-BGNPs and MCPM
The spherical monodispersed Sr/F-BGNPs were successfully synthesized using the sol-gel process (Figure 1). SEM images suggested that the particle size of Sr/F-BGNPs was
Color Stability
The highest and lowest observed color differences (ΔE00) were obtained from S10M6 (5.8 ± 2.6) and MF (1.0 ± 0.6), respectively (Figure 11). The color difference of MF was significantly lower than that of S10M6 (p = 0.0281) and S0M0 (4.3 ± 1.2) (p = 0.0082). S10M6 also exhibited a significantly higher ΔE00 than S5M6 (2.6 ± 1.0) (p = 0.0108) and S10M3 (1.4 ± 0.9) (p = 0.0004). Additionally, S10M3 exhibited a significantly lower ΔE00 than S0M0 (p = 0.0281). The results from the factorial analysis suggested that the increase in MCPM level increased ΔE00 by 99 ± 82%, while the effect from rising Sr/F-BGNPs was minimal. Figure 11. Color difference (ΔE00) of all materials after immersion in water for 1 week. The boxes represent the first quartile (Q1) to the third quartile (Q3), the horizontal line in the boxes represents the median, the whiskers represent the maximum and minimum values, and "+" represents the mean value (n = 5). Stars indicate p < 0.05.
Discussion
It was reported that bioactive glass nanoparticles and calcium phosphates promote ion release that could potentially enhance remineralizing actions and prevent dental caries [8,10,14,15]. However, these additives are hydrophilic and non-silanized, which may inevitably reduce the physical/mechanical properties of the composites. The aim of this study was to prepare dual-cured resin composites for core build-up with added reactive fillers (MCPM, Sr/F-BGNPs). The effect of MCPM and Sr/F-BGNPs on the tested properties was determined. The results indicated that the addition of MCPM and Sr/F-BGNPs affected the light-activated polymerization, biaxial flexural strength, surface microhardness, and color stability of the materials. Hence, the research hypothesis was rejected.
Degree of Monomer Conversion (DC)
In general, composites with a high DC may exhibit a low risk of releasing unreacted toxic monomers from the materials [53]. The concerns associated with released monomers include cytotoxic, teratogenic, estrogenic, mutagenic, and genotoxic effects, as well as allergic reactions, depending upon the released substances [54]. Additionally, it was demonstrated that monomers released from resin composites might interfere with biofilm metabolism, leading to biofilm dysbiosis or an increase in cariogenicity [31]. This may subsequently enhance the risk of caries formation around the composites.
The results showed that the additives slightly reduced the DC of the experimental materials, probably due to the mismatch between the refractive index (RI) of the fillers and the monomers. The RI of the fillers was expected to fall within 1.4-1.5 [55] to match the RI of the monomer mixture such as UDMA/TEGDMA (RI ~1.45) [10]. Although the actual refractive indices of MCPM and Sr/F-BGNPs are not known, the results indicated that the effect of the additives on the reduction of DC upon light activation was minimal (~2%) compared to that reported in previous studies (~4-10%) [8,10]. This could be due to the lower concentration of the additives used in the current study compared to the previous studies. The ability of a material to polymerize upon chemical activation is also essential for core build-up materials to ensure adequate polymerization in the deep cavity where light transmission is limited [56]. The results showed that all experimental materials exhibited similar DC upon self-curing regardless of the level of additives. This may be because the chemical-activated polymerization was mainly governed by the type of monomers and the level of initiator/activator contained in the materials [57]. The DC upon chemical activation of all materials except for S0M0 was higher than the light-curing DC by ~8%. A possible reason could be that the self-curing conversion of the materials was not affected by light penetration.
The experimental materials showed a more rapid increase in DC with time and higher DC values than MF. The actual composition of MF is unknown, so a direct comparison of the different components may not be entirely possible. However, we speculated that the higher DC of the experimental materials could be due to UDMA being the primary bulk monomer in the experimental materials, while the bulk monomer of MF is bis-GMA. It was reported that UDMA monomers tend to exhibit a higher rate of polymerization than bis-GMA monomers [58][59][60], which may be due to the more flexible structure of UDMA compared to bis-GMA. A monomer mixture containing a monomer with a low glass transition temperature (T_g) (UDMA, T_g = −35 °C) usually enables a higher DC compared with a high-T_g monomer (bis-GMA, T_g = −8 °C) [61]. Additionally, the polymerization of UDMA can be enhanced via the chain transfer reactions of -NH- groups in the molecule [62].
The DC of the experimental composites upon light- or chemical-activated polymerization was similar to that of commercial core build-up materials (50-70%) [3]. A minimum requirement for the DC of resin composites has not yet been established in the ISO (International Organization for Standardization) standard. However, it is suggested that a conversion greater than 50% for dimethacrylate composites may be sufficient to bind all monomers within the resin matrix [28]. This may help prevent toxic monomer release. Monomer release should be tested in future work.
Biaxial Flexural Strength (BFS) and Modulus (BFM)
Resin composites for core build-up material should provide sufficient mechanical properties to withstand the occlusal loads during function. The strength of the materials was primarily governed by filler characteristics, such as filler load, filler type, size and geometry, and silanization [63]. A previous study employed large-particle MCPM (diameter of ~50 µm) at high concentration (10-20 wt%) as the ion-releasing filler to enable the desirable mineralizing effect for the composite [10]. However, the main limitation was the significant reduction in the strength of the materials. The addition of reactive fillers in the current study only slightly reduced the strength of the materials. This could be due to the smaller particles (200 nm and 10 µm) of the reactive fillers, which may enhance mechanical strength [64]. Additionally, the use of hybrid inorganic fillers consisting of different particle sizes (0.7 µm, 7 µm, and 180 nm) may help maximize filler packing, which could potentially enhance the mechanical strength of the experimental materials [65].
The reactive fillers reduced the strength of the composites. This could be due to the lack of silanization of MCPM and Sr/F-BGNPs, causing poor interaction between the fillers and the resin matrix. This may negatively affect stress transfer between the matrix and fillers, thus reducing the strength of the materials [66]. The reactive fillers were not silane-treated due to concern regarding a reduction in the ion release of the materials. Additionally, the composite paste was prepared by hand mixing. Hence, it is possible that the small fillers, such as Sr/F-BGNPs, may have agglomerated [67]. This may lead to poor dispersion of the fillers in the matrix and affect the strength of the composites. The fracture surface revealed MCPM particles in the bulk of the materials. It was expected that the MCPM particles should be mostly dissolved after four weeks due to their low Ca/P ratio (~0.5). The detection of multiple MCPM particles in the composite could be due to the high cross-linking of the polymer network, reducing water diffusion into the materials. The slow dissolution of MCPM may reduce the adverse effects on the materials' physical/mechanical properties, but it may also limit the ion release and remineralizing actions of the materials. Additionally, the detection of precipitates containing Sr or F may suggest that the released ions reacted and precipitated as plate-like crystals [68,69] inside the bulk of the materials. The formation of new precipitates may help fill voids or defects inside the materials. This may help reduce crack propagation and maintain the mechanical strength of the materials [42].
The flexural strength recorded for the commercial material in the current study was higher than that reported in previous studies [70,71]. The specimens in the current study were allowed to post-polymerize for 24 h after light-curing prior to immersion for another 24 h. A previous study showed that the DC of dual-cured resin composites activated by light-curing or self-curing modes increased by ~10% at 24 h [31]. The continuation of polymerization may potentially enhance the cross-linking and mechanical strength of the material [33,62]. The flexural strength of the experimental materials at 24 h or 4 weeks passed the requirement of the standard (>80 MPa) (BS EN ISO 4049:2019, Dentistry - Polymer-based restorative materials) [72]. Although the flexural strength test in the current study was the biaxial flexural strength (BFS) test, the results from the BFS test were expected to be comparable to, with lower variation than, those of the 3-point bending test indicated by the ISO [73]. The main limitation of the current study was that the specimens were not subjected to thermocycling, which could help assess the long-term mechanical performance of the materials.
Surface Microhardness
High surface microhardness may improve the wear resistance of composites. The surface microhardness of resin composites usually correlates well with the degree of polymerization and cross-linking of the materials [74], in agreement with the results of the current study. The high surface microhardness of S0M0 could be due to its higher DC upon light-curing or its higher proportion of rigid inorganic glass fillers compared with the other experimental formulations. The surface microhardness values obtained in the current study were similar to those of commercial materials reported in a published study [75]. The increase in surface hardness at four weeks may be due to post-cure polymerization promoting cross-linking of the polymer network [75]. The addition of hydrophilic fillers to the experimental materials may encourage water plasticization of the resin matrix; hence, the increase in hardness of the experimental materials containing MCPM and Sr/F-BGNPs was not significant. The surface microhardness was measured on the top surface, where the materials were directly exposed to light-curing. Future work could additionally assess hardness at the bottom surface and at greater depths at different time points, which would provide more information on the light- and self-cure polymerization efficacy of the materials.
Mass and Volume Changes
The mass and volume of the materials show complex behaviors governed by the equilibrium between component dissolution and water absorption [76]. It has been proposed that the mass and volume changes of composites containing calcium phosphates due to water sorption increase linearly with the square root of time (in hours) for up to one week, indicating a diffusion-controlled mechanism [28]. This was similar to the results observed in the current study. The mass and volume increases of the composite (~1 wt% and 1 vol%) were also lower than those reported for previously developed composites (~5 wt% and 5 vol%) [8,10]. It was demonstrated that the volume of composites containing 10 wt% of calcium phosphate (MCPM and tricalcium phosphate) increased by ~3% upon water sorption [28]. Hence, composites containing 3-6 wt% of MCPM were expected to exhibit mass or volume changes in the range of ~1%, in accordance with the results of the current study.
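The diffusion-control claim can be checked numerically by regressing early mass gain against the square root of immersion time. The sketch below uses hypothetical data points, not the study's measurements, purely to illustrate the fit.

```python
# Fickian (diffusion-controlled) water sorption: early mass gain grows
# linearly with sqrt(time). Data below are hypothetical illustration values.
import numpy as np

t_hours = np.array([1.0, 4.0, 9.0, 25.0, 49.0, 100.0, 168.0])     # immersion time (h)
mass_gain = np.array([0.10, 0.21, 0.30, 0.50, 0.70, 0.98, 1.02])  # wt%

sqrt_t = np.sqrt(t_hours)
# Fit only the pre-plateau region (the last point has begun to plateau).
slope, intercept = np.polyfit(sqrt_t[:-1], mass_gain[:-1], 1)
r = np.corrcoef(sqrt_t[:-1], mass_gain[:-1])[0, 1]
print(f"slope = {slope:.3f} wt% per sqrt(h), r = {r:.3f}")
# r close to 1 over the first week is consistent with diffusion control.
```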
The mass and volume changes observed among the experimental formulations were comparable. One possible reason is the low level of MCPM used (2 g/100 mL). Another is the high DC of the materials, which may limit water diffusion into them; hence, the volume of the experimental materials changed only slightly. Ideally, the expansion should match the polymerization shrinkage of the composites to relieve shrinkage stress. However, the immediate shrinkage upon polymerization of the composites was not measured here. Future work should employ the bonded-disk method [77] to determine shrinkage immediately after curing and compare it with the volume expansion of the materials.
Color Stability
The color changes of resin-based restorative materials are controlled by various factors such as filler size, aging, water sorption, voids or defects, and the level of cross-linking of the polymer network [78,79]. Color stability of core build-up materials is desirable to ensure predictable esthetic outcomes. It has been reported that a color difference (∆E00) greater than 0.8 is visually identifiable by the viewer (PT, perceptibility threshold), whereas changes remain acceptable (AT, acceptability threshold) when ∆E00 is less than 1.8 [80]. Hence, a perceptible but still acceptable color difference lies within a ∆E00 of 0.8-1.8. It has been demonstrated that exposure of large glass fillers on the surface of composites may increase surface irregularities or roughness upon aging [52]. This could promote pigment deposition and negatively affect the materials' optical properties and color stability [79,81], which may explain the high ∆E00 observed for the experimental materials. The results suggested that S10M3 and MF exhibited color changes within the acceptable range. The lower glass filler content (~55 wt%) of MF [82] compared with the experimental materials may help reduce surface irregularities, which could influence the color and optical properties of the material. For S10M3, the high proportion of spherical Sr/F-nanoparticles (200 nm) may help reduce surface irregularity and improve optical properties [20], which could reduce the color difference [83]. The use of MCPM at a low level may also reduce water sorption, which affects the color stability of the materials [84].
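The threshold logic above is simple enough to capture in a few lines. A minimal sketch follows, with a hypothetical measured value; the PT and AT cutoffs are the ones cited in the text.

```python
# Classify a CIEDE2000 color difference against the perceptibility (PT = 0.8)
# and acceptability (AT = 1.8) thresholds cited above.
def classify_color_change(delta_e00: float, pt: float = 0.8, at: float = 1.8) -> str:
    if delta_e00 <= pt:
        return "imperceptible"
    if delta_e00 <= at:
        return "perceptible but clinically acceptable"
    return "clinically unacceptable"

print(classify_color_change(1.2))  # hypothetical value -> perceptible but acceptable
```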
It should be noted that the current study was an in vitro study; hence, its clinical relevance should be interpreted carefully. In general, the experimental dual-cured resin composites for core build-up containing the additives (MCPM and Sr/F-BGNPs) at the designated concentrations showed acceptable physical and mechanical properties. Additionally, only S10M3 exhibited satisfactory color stability and may therefore be considered a suitable candidate formulation for future studies. Further remineralizing and antibacterial tests are needed to optimize candidate formulations that exhibit anti-caries actions.
Conclusions
The experimental dual-cured resin composites for core build-up containing Sr/F-BGNPs and MCPM provided a higher degree of monomer conversion than the commercial material. The additives reduced the biaxial flexural strength, surface microhardness, and color stability of the experimental composites; however, the values remained within the satisfactory range. Increasing the concentration of the additives had no detrimental effect on the physical and mechanical properties of the experimental materials. However, the materials' anti-caries effects need to be examined in future studies.
| 7,232.4 | 2022-06-01T00:00:00.000 | ["Materials Science"] |
Multimodality Imaging Approaches in Alzheimer's disease. Part II: 1H MR spectroscopy, FDG PET and Amyloid PET
In this Part II review, complementing Part I published in this supplement, the authors cover the imaging techniques that evaluate Alzheimer's disease according to its different metabolic and molecular profiles. MR spectroscopy, FDG PET, and amyloid PET are discussed in depth.
INTRODUCTION
More than 5.0 million Americans are currently afflicted by AD: 5 million people aged over 65 years and 200,000 individuals younger than 65 years with younger-onset AD. 1 Clinical diagnosis of AD by neuropsychological tests has low reliability, limited sensitivity, and narrow specificity; these tests are most accurate only in the advanced stages of the disease. Advanced neuroimaging modalities therefore challenge traditional approaches to AD diagnosis and monitoring.
Besides neuronal loss, the other hallmark histological changes in AD are the accumulation of abnormal amyloid-β (Aβ) proteins forming amyloid plaques (AP) and neurofibrillary tangles (NFTs).
An ideal neuroimaging marker should accurately detect early neurodegenerative pathology, reflect pathological stages across the entire severity spectrum, predict when an individual with early pathology will become demented, and monitor the effect of a therapeutic intervention on the neurodegenerative pathology. 3 In this part of the review, the roles and limitations of the biomarkers used in PET and 1H (proton) MR spectroscopy for the management of AD are discussed.
1H MR SPECTROSCOPY
1H MR spectroscopy (MRS) can detect different metabolic substrates such as N-acetylaspartate (NAA), creatine and phosphocreatine (Cr), and choline (Cho). Additional metabolites that can be measured with more complex techniques are myo-inositol (mI), the glutamate and glutamine complex (Glx), and lactate (Lac). 1 The most consistent MRS finding reported for AD is decreased NAA in many brain regions, which may indicate neuronal loss or mitochondrial dysfunction. Subjects with AD have shown reduced NAA in the hippocampus, 2,3 posterior cingulate, 4,5 temporal lobe, 6,7 mesial temporal lobe, 8 occipital lobe, 6,9,10 parietal lobe, 6,11,12 and frontal lobe. 13 The decrease in NAA in white matter (WM) is observed to be smaller than in grey matter (GM), although some authors have reported no WM change in NAA. 8 Another concordant result is an increase in mI concentration at several brain locations, which is linked to gliosis or membrane abnormalities (Figure 1).
Areas showing increased mI include the mesial temporal lobe, 14 anterior and posterior cingulate, 5,15 parietal lobe, 11 occipital lobe, 10 and white matter. 11 The resonance peak of mI consists of multiple peaks, or so-called multiplet structures, that yield a complex and closely spaced group of resonance lines at clinical field strengths. This broad spectral pattern is not measured accurately using a single-peak model, which may account for variability in earlier reports. Even recently, despite improvements in automated processing software, clinical groups have reported difficulties in obtaining consistent analysis of the mI peak. 1 Some investigators have used ratios between MRS-visible metabolites to distinguish AD from normal subjects. Kantarci et al. 16 found a higher myo-inositol/creatine (mI/Cr) ratio in the posterior cingulate in AD compared with controls (p<0.001), in AD compared with MCI (p=0.048), and in MCI versus controls (p=0.006). The NAA/mI ratio was also lower in AD patients compared with controls (p<0.001), in AD compared with MCI (p=0.002), and in MCI compared with controls (p=0.008). NAA/mI at the posterior cingulate provided the highest sensitivity for distinguishing AD from controls, 82% at a fixed specificity of 80%. Another study, by Wang et al., 17 found differences in NAA/Cr, mI/Cr, and mI/NAA ratios at the hippocampus among AD, MCI, and normal subjects. However, at the posterior cingulate, only mI/NAA differed when comparing AD with controls and AD with MCI. Moreover, they also noted a good correlation between mI/NAA and the level of cognitive impairment in subjects with AD and MCI.
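To make the ratio-based classification concrete, the following sketch computes an NAA/mI ratio and applies a cutoff fixed on a control population so that specificity is 80%, mirroring the design of the Kantarci analysis. All numbers here are hypothetical, not values from the cited studies.

```python
# NAA/mI ratio classification at a cutoff chosen for 80% specificity.
import numpy as np

def naa_mi_ratio(naa: float, mi: float) -> float:
    return naa / mi

# Hypothetical posterior-cingulate NAA/mI ratios for a control group:
controls = np.array([2.6, 2.4, 2.8, 2.5, 2.3, 2.7, 2.2, 2.9, 2.5, 2.6])
# Calling "positive" any ratio below the 20th percentile of controls leaves
# 80% of controls correctly negative, i.e. a fixed specificity of 80%.
cutoff = np.percentile(controls, 20)

patient = naa_mi_ratio(naa=7.2, mi=4.0)  # hypothetical patient spectrum
verdict = "AD-like" if patient < cutoff else "within control range"
print(f"cutoff = {cutoff:.2f}, patient NAA/mI = {patient:.2f} -> {verdict}")
```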
Conflicting reports about changes in Cho in AD patients have been noted. Some studies report increased Cho. 15,18 For example, the study of Mackey et al. 18 found an elevated Cho/Cr ratio at the posterior cingulate and precuneus in AD versus controls. It has been suggested that the increase in the Cho peak is due to membrane phosphatidylcholine catabolism supplying free choline to compensate for the deficient acetylcholine production commonly seen in AD. Cho/Cr decreases with the use of cholinergic agonist drugs in AD, which may imply that down-regulation of choline acetyltransferase activity is responsible for the rise in Cho. 19 However, others report no change 4,6,10,12,20 or decreases. 5,7 This discrepancy may result from differences in MRS protocols or anatomical variation in voxel selection.
NAA/mI or mI/NAA ratios seem to be the most useful parameters for several reasons. They are independent of Cr values, decreasing the variability caused by age and other factors without requiring absolute concentrations to be calculated. They have also been shown to be a dependable diagnostic measure for AD versus controls with high accuracy. 15,16 Many studies have compared MRS findings across different types of dementia. Schuff et al. 21 compared NAA values in subcortical ischemic vascular dementia (SIVD) with those in AD: the SIVD group had an NAA peak reduced by 13% in the frontal cortex and by 20% in the left parietal cortex compared with AD subjects. Kattapong et al. 22 showed lower NAA/Cr and NAA/Cho ratios in vascular dementia than in AD (p<0.02). A study by Waldman 23 found a higher mI/Cr ratio in AD patients than in vascular dementia patients. In contrast with the results of Kattapong, 22 they reported similar NAA/Cr and NAA/Cho values between clinical groups; they suggested that this may reflect the small sample of control subjects and possibly the method of measuring peak heights from spectra scaled to the amplitude of NAA. Ernst et al. 20 found reduced NAA and Glx and increased mI in the frontal lobe of frontotemporal dementia patients, whereas there was no statistically significant frontal abnormality in AD subjects. Some patients in the frontotemporal dementia group also showed a Lac peak. They reported an overall accuracy of 84% for discrimination among groups. Coulthard et al. 24 reported a reduction of NAA/Cr in frontotemporal regions, but not in parietal lobes, in frontotemporal dementia. In contrast, Garrard et al., 25 who used MRS to measure metabolites in the posterior cingulate in patients with the semantic dementia and progressive nonfluent aphasia subtypes of frontotemporal dementia in comparison with AD patients, reported findings indistinguishable between frontotemporal dementia and AD because of overlapping decreases in NAA/Cr and increases in mI/Cr. MRS has also been studied as a tool to predict which patients with MCI will convert to AD. Modrego et al. 26 examined 53 patients with aMCI and followed them for an average of 3 years. By measuring the occipital NAA/Cr ratio, they found that MRS could identify the true converters with high accuracy; the striking finding was a 100% negative predictive value and an overall accuracy of 88.7%. Interestingly, they found no significant results in analyses of the hippocampal and parietal regions. They explained that these results, inconsistent with the early involvement of hippocampal and parietal areas in AD, may be caused by partial volume effects, whereby the large voxel size probably included non-targeted tissue in the analysis, or by there being no difference in neuropathological alterations at the hippocampus and parietal lobe between converters and non-converters. A longitudinal study by Fayed et al. 27 recruited 110 subjects with aMCI with a follow-up period of 29 months. They reported that MRS measurement of NAA/Cr in the posterior cingulate had a sensitivity higher than 80% for predicting who would convert to probable AD. However, distinguishing different types of MCI was not possible using MRS.
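The headline figures in the Modrego study (100% negative predictive value, 88.7% overall accuracy in 53 patients) follow from simple confusion-matrix arithmetic. The sketch below uses hypothetical cell counts chosen only so that the arithmetic reproduces those figures; the actual counts are not given here.

```python
# Sensitivity, specificity, predictive values and accuracy from a 2x2 table.
def diagnostic_metrics(tp: int, fp: int, tn: int, fn: int) -> dict:
    return {
        "sensitivity": tp / (tp + fn),
        "specificity": tn / (tn + fp),
        "ppv": tp / (tp + fp),
        "npv": tn / (tn + fn),   # fn = 0 yields the 100% NPV case
        "accuracy": (tp + tn) / (tp + fp + tn + fn),
    }

# Hypothetical split of 53 aMCI patients with no false negatives:
print(diagnostic_metrics(tp=20, fp=6, tn=27, fn=0))
# accuracy = 47/53 ~ 0.887, npv = 1.0
```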
Godbolt et al. 28 used MRS in carriers of genetic mutations who have a very high risk of developing AD. The investigators demonstrated that the NAA/Cr and NAA/mI ratios of carriers were significantly lower than those of control groups, with mean reductions in NAA/Cr and NAA/mI of 10% and 25%, respectively. The reduction of NAA/mI in carriers was related to the proximity of the expected age at onset.
The correlation between antemortem MRS results and postmortem neuropathology has been studied by Kantarci et al. 29 The authors found associations between decreased NAA/Cr, increased mI/Cr, higher Braak stage, higher neuritic plaque score, and more typical histological findings of AD. NAA/mI proved to be the strongest predictor of the pathologic likelihood of AD, with the best correlation noted between the NAA/mI ratio and Braak stage.
The concordance between MRS and neuropsychological tests depends on the type of cognitive deficit the patient presents. Chantal et al. 30 studied the correlations between the medial temporal lobe and verbal memory, the parietotemporal lobe and language and visuoconstructional skills, and the frontal lobe and executive functions in patients with AD, and found strong correlations between regional MRS changes and the associated cognitive deficits.
The ability of MRS to monitor the effectiveness of therapies in drug trials has been studied. Bartha et al. 31 measured the levels of NAA, Cho, NAA/Cr, Cho/Cr, and mI/Cr in untreated AD patients and followed them after four months of treatment with the cholinesterase inhibitor donepezil; 1H MRS was acquired at the right hippocampus. No cognitive improvement was found after treatment, and decreased levels of all measured metabolites were observed. They concluded that the reduced NAA levels indicated continued neuronal loss, while the decrease in mI after treatment might indicate a subsequent reduction in reactive gliosis. However, limitations due to the small number of subjects and the limited follow-up time should be considered.
Limitation. Although recent data suggest that MRS may have a role in the clinical diagnosis and prognosis of AD, some limitations have to be discussed. It is important to mention that metabolite ratios provide robust in vivo markers of biochemistry, but they have to be interpreted with caution because ratios are intrinsically ambiguous and prone to misinterpretation. 32 Technical problems in adjusting the MRS echo time (TE) may reduce the test-retest reproducibility of metabolite measurements. The medial temporal region is one of the sites of greatest interest in AD patients, yet the anterior and mesial portions of the temporal lobe are situated near the tissue-air interface close to the petrous bone. Because of the difference in magnetic susceptibility between brain tissue and air, establishing a homogeneous magnetic field and water suppression within the 1H MRS voxel there is difficult. MRS can be performed by two techniques: single-voxel spectroscopy (SVS) or the multiple-voxel technique, known as chemical shift imaging (CSI). One limitation of SVS is the size of the voxel, which is usually bigger than most mesial temporal structures, producing partial volume averaging of adjacent tissue and impairing the regional specificity of SVS. 1H MRS at higher field strengths could potentially give comparable SNR using smaller voxels. The duration of a spectroscopic study is sometimes too long, which can be a major limitation for less-cooperative AD patients. 33 These pitfalls of MRS can be minimized by applying standard protocols.
18F-FDG PET
18F-fluorodeoxyglucose positron emission tomography (FDG PET) has been shown, with autopsy confirmation, to have very high diagnostic value in establishing the presence or absence of AD and other neurodegenerative diseases. PET is sensitive to change over time and thus has value in monitoring disease worsening and therapeutic interventions. FDG PET measures glucose metabolic activity, and patients with neurodegenerative dementia show reduced regional cerebral metabolism.
Prodromal AD, a pre-dementia state of mild memory loss in which the ability to perform a daily routine is retained, 34 or MCI due to AD as classified by the new AD criteria, 35 may not show the characteristics of more severe AD. However, FDG PET scans show a significant decrease in metabolism in the posterior association cortex, precuneus, and posterior cingulate. 34 These critical early-diagnostic features can easily be overlooked, as the aforementioned regions generally have a higher glucose metabolic rate than surrounding tissue; impairment leads them to merely "blend in" to the surrounding regions rather than stand out in a qualitative assessment of an FDG PET scan. 36 Additionally, patients diagnosed with MCI whose FDG PET scans show AD-like patterns have been found to eventually develop AD. 34 These findings demonstrate that FDG PET can potentially be used to predict conversion from MCI to later-stage AD.
A recent study carried out by Shokouhi et al. 37 proposed an imaging classifier that correlates regional metabolic changes over time, termed the regional 18F-FDG time correlation coefficient (rFTC). They performed a baseline scan and repeated it after an average of 4.3 ± 1 years, used linear mixed-effects models to compare the rates of rFTC decline between controls and individuals at risk for AD, and then examined the association between each subject's rFTC and cognitive test results. The rFTC of control subjects remained constant over time, whereas in MCI the values dropped much faster, by an additional annual change of -0.02. The decline in rFTC in MCI subjects was also associated with cognitive change. The investigators concluded that this classifier could be used to monitor cognitive deterioration and disease progression.
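A minimal sketch of the longitudinal analysis described above: a linear mixed-effects model testing whether the annual rFTC slope differs between groups. The data are simulated and all variable names are assumptions; only the modeling pattern is the point.

```python
# Linear mixed-effects model: does rFTC decline faster in MCI than in controls?
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
rows = []
for subj in range(40):
    group = "mci" if subj >= 20 else "control"
    extra = -0.02 if group == "mci" else 0.0        # extra annual decline in MCI
    for years in (0.0, 4.3):                        # baseline and follow-up scans
        rows.append({"subject": subj, "group": group, "years": years,
                     "rftc": 0.80 + extra * years + rng.normal(0, 0.01)})
df = pd.DataFrame(rows)

# Random intercept per subject; the interaction term is the group difference in slope.
fit = smf.mixedlm("rftc ~ years * group", df, groups=df["subject"]).fit()
print(fit.params["years:group[T.mci]"])  # estimated additional annual change in MCI
```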
The characteristic findings in the regions mentioned above highlight the importance of integrating FDG PET more fully into clinical settings because of its power as an early diagnostic tool. Landau et al. compared the performance of FDG PET with the Functional Activities Questionnaire (FAQ), which is often used to monitor functional abilities in a clinical setting. 38 It was found that the FAQ might not catch small changes in a patient's cognitive decline, while FDG measures were strongly associated with change in FAQ results, illustrating FDG PET's potential to supplement more subjective, clinical forms of diagnosis.
Hallmarks of progressed AD shown by FDG PET include hypometabolism in posterior regions of the brain, particularly the temporoparietal region and posterior cingulate (Figure 2). 39 Impairment of the frontal cortex may also be present, but this is associated with later-stage AD and may not occur initially (Figure 3).
Herholz et al. concluded that hemispheric asymmetry might be present, which could be responsible for language and visual impairment. 36 PET imaging also demonstrates that certain areas of the brain are spared in AD, especially the basal ganglia, thalamus, cerebellum, and primary sensorimotor cortex. 40 Mosconi et al. suggest that AD-related processes may affect the entorhinal cortex and other regions of the brain, which may facilitate functional impairment. 41 The initial degree of hypometabolism determined by PET has been shown to correlate with the magnitude of future decline. 40 Therefore, in addition to showing key characteristics of AD-caused neurological damage, FDG PET can map the progressive cognitive decline of AD. FDG PET reveals that as AD progresses, parietotemporal hypometabolism becomes increasingly bilateral and the frontal cortex becomes more hypometabolic. 34 Comorbid conditions can affect the specificity of these predictions: depressed patients and individuals with abnormal thyroid function have a higher false-positive rate for predicted progressive cognitive decline.
In addition to supporting image-based diagnostic criteria for AD, FDG PET can distinguish AD from similar neurodegenerative conditions. AD and other types of dementia have characteristic FDG PET patterns that can be used to differentiate diagnoses at an early stage, when the specific type remains unclear. FTD, which is often misdiagnosed as AD in its early stages, is characterized by behavioral and language disturbance and can therefore be difficult to distinguish from early AD in a clinical setting. However, the distinction is easier with FDG PET, since reduced regional glucose uptake is seen in the frontal lobes and anterior portions of the temporal lobes in FTD, whereas the metabolic deficit is seen more in posterior areas of the brain in AD (Figure 4).
Foster et al. showed a sensitivity of 97% and a specificity of 86% for distinguishing between AD and FTD in a large series with autopsy-confirmed diagnoses. 39 In another similar condition, dementia with Lewy bodies (DLB), FDG PET shows reduced metabolism in parieto-occipital areas such as the primary visual cortex and occipital association areas, with normal glucose uptake in the association temporal and posterior cingulate cortex, whereas the occipital cortex is preserved in AD (Figure 5). In a study by Berti 42 using postmortem diagnosis, occipital hypometabolism distinguished DLB from AD with 83-90% sensitivity and 80-87% specificity. Other metabolic patterns have been reported in dementia with Parkinson disease, vascular dementia, and Huntington disease. 40 These findings show that FDG PET is a valuable tool for differential diagnosis between neurological disorders that may appear very similar.
The Alzheimer's Disease Neuroimaging Initiative (ADNI) and post-mortem studies have demonstrated the power of FDG PET as a biomarker for AD. In a review of the literature, 43 studies that used clinical assessment as the reference standard provided a pooled accuracy of 93%, sensitivity of 96%, and specificity of 90% for distinguishing AD subjects from normal subjects. Silverman et al. 44 used neuropathological confirmation as the reference standard in testing patients with dementia: among 138 autopsied subjects, including 97 with confirmed AD, FDG PET yielded a sensitivity of 94% and a specificity of 73% for AD diagnosis. FDG PET also bears some prognostic value, since the pattern of metabolic changes can differentiate a progressive from a nonprogressive course; a negative PET scan yielded a negative likelihood ratio of 0.1 (95% confidence interval, 0.06-0.16).
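The negative likelihood ratio quoted above is a one-line computation from sensitivity and specificity. Plugging in the Silverman et al. values (sensitivity 0.94, specificity 0.73) illustrates how a negative FDG PET scan drives the post-test odds down by roughly a factor of ten.

```python
# Negative likelihood ratio: LR- = (1 - sensitivity) / specificity.
def negative_likelihood_ratio(sensitivity: float, specificity: float) -> float:
    return (1.0 - sensitivity) / specificity

print(round(negative_likelihood_ratio(0.94, 0.73), 2))  # ~0.08, i.e. on the order of 0.1
```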
Global disease assessment enhances the accuracy of FDG PET measurements. The principle of global metabolic activity is to multiply the partial-volume-corrected average SUV by the volume of the organ of interest obtained from anatomical modalities (CT/MRI); the product is termed the metabolic volumetric product (MVP). It was first introduced by Alavi et al. 45 in an assessment of the brain in AD patients and age-matched controls: by multiplying segmented brain volumes from MRI by mean cerebral metabolic rates for glucose, significant differences between the two groups could be demonstrated. This approach requires calculating tissue volume with modern computer-based algorithms and partial-volume-corrected measurement of metabolic activity at each site of interest. Using the same concept, another study by Alavi et al. 46 investigated 20 patients with probable AD and 17 age-matched controls who underwent FDG PET and MRI. They found that atrophy-weighted total brain metabolism (calculated by multiplying the brain volume by the average metabolic rate) showed a highly significant difference between the two groups (29.96 ± 7.9 for AD and 39.1 ± 7.0 for controls, p<0.001). Absolute whole-brain metabolism (calculated by multiplying the atrophy-corrected average CMRglc by brain volume) also showed a significant difference (37.24 ± 9.65 in AD and 45.09 ± 8.52 in controls, p<0.014). These measurements correlated with mini-mental status examination (MMSE) scores. A recent study carried out by Musiek found that whole-brain MVP was significantly lower in AD and accurately distinguished AD patients from controls 47 (a worked sketch of the MVP arithmetic follows the limitations overview below).

Limitation. Some of the limitations include the creation of artifacts and noise during FDG PET image reconstruction, the disadvantages and potential sources of error in both qualitative and quantitative analysis techniques, and disadvantages of semi-quantitative methods. FDG PET imaging can also be affected by pre-existing patient conditions or by protocol errors made during the scanning process.
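As flagged above, here is a minimal sketch of the MVP arithmetic. The function is generic; the input values are hypothetical but chosen so the outputs land near the group means reported by Alavi et al. (units depend on how CMRglc and volume are expressed).

```python
# Metabolic volumetric product: mean metabolic rate x segmented organ volume.
def metabolic_volumetric_product(mean_cmrglc: float, volume: float) -> float:
    """Atrophy-weighted whole-organ metabolism (here, whole brain)."""
    return mean_cmrglc * volume

# Hypothetical inputs roughly reproducing the reported group means:
ad = metabolic_volumetric_product(mean_cmrglc=0.027, volume=1110.0)    # ~30
ctrl = metabolic_volumetric_product(mean_cmrglc=0.031, volume=1260.0)  # ~39
print(f"AD: {ad:.1f}, controls: {ctrl:.1f}  (lower MVP expected in AD)")
```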
Another notable limitation, which has been studied extensively, is partial volume error (PVE). Incorrect measurements of tissue activity arise from the inability of scanners to resolve structures smaller than 2-3 times the full-width-at-half-maximum spatial resolution of the scanner, 41 especially in the atrophic brains of elderly subjects or AD patients. PVE can also be caused by an incorrect superposition of voxel parameters onto brain tissue, causing voxels to contain different tissue types (tissue fraction). Additionally, patient motion or the movement of the circulatory or respiratory systems can generate PVE. 48 Because analyses of PET images depend on measurements of metabolic activity, and because differential patterns of glucose uptake serve as important characteristics of neurological conditions, it is important that PVE be corrected in order to prevent misdiagnosis or images that show no evidence of abnormality in cases where abnormalities are truly present. There currently exists a variety of methods for partial volume correction (PVC), which seeks to curb the problems caused by PVE. Methods to reduce PVE include techniques that utilize anatomical information to correct individual voxels, specific regions of interest (multiple or single), or whole images. Other techniques include post-reconstruction methods, using projection data to obtain region-of-interest (ROI) mean values, or methods that allow a gradient of activity levels within each region to correct for the assumption that activity within each region is uniform. Techniques to address tissue fraction have also been developed, including methods in which edge voxels are treated as multiple tissue types. 48 Mosconi et al. note the relative lack of studies examining individual cases of MCI, which may be preventing a more detailed understanding of MCI features at an individual level, as well as the dearth of studies comparing MCI with disorders other than AD. 49
AMYLOID PET
The first amyloid-β (Aβ) PET examination in humans was performed in an individual with probable AD using the 11C-labeled radiopharmaceutical Pittsburgh Compound B (PiB). Amyloid imaging has repeatedly been claimed to be a very sensitive technique for the non-invasive, in vivo identification of amyloid plaques in brain tissue, therefore allowing early confirmation of AD. The normal pattern of amyloid imaging is white matter retention of the PiB compound, with no cortical uptake (Figure 6).
The other cortical regions, including the hippocampus and amygdala, did not show any remarkable PiB uptake compared with controls. Subcortical WM, pons, and cerebellum, which are unaffected by amyloid deposition, showed low PiB binding. Around the time PiB was developed, Shoghi-Jadid et al. 55 used FDDNP labeled with fluorine-18, a radiofluorinated derivative of 2-(1-(6-(dimethylamino)-2-naphthyl)ethylidene)malononitrile (DDNP), as a PET tracer to track the deposition sites of neurofibrillary tangles (NFTs) and Aβ senile plaques in living AD patients. 18F-FDDNP has been postulated to recognize amyloid plaques as well as NFTs in living humans; moreover, it is the only imaging agent that visualizes AD pathology in the hippocampal region in vivo. 18F-FDDNP accumulates significantly in several cortical areas of patients with AD. 56 Small et al. 57 reported significantly lower whole-brain FDDNP-PET binding values in the control group compared with the MCI group, as well as lower values in the MCI group compared with AD.
Recently, three new, longer-lived 18F tracers, 18F-florbetapir, 18F-florbetaben, and 18F-flutemetamol, have been brought into research and clinical use. In 2012, the Food and Drug Administration (FDA) approved the clinical use of the Aβ probe Amyvid™ (18F-florbetapir) for the evaluation of patients with suspected AD. Clark et al. 58 used 18F-florbetapir to predict the presence of Aβ in the brain at autopsy. A good correlation was obtained between the visual interpretation of 18F-florbetapir PET imaging and the autopsy findings that confirmed Aβ deposition in brain tissue according to the standard pathological criteria for AD; a very high rate of agreement (96%) was seen between amyloid PET imaging and histological confirmation of Aβ. Another study correlating 18F-florbetapir with postmortem histopathology was performed by Choi et al. 59 There was very good correlation between Aβ plaques identified by specific pathological staining techniques, including silver staining and special immunohistochemical assays, and the florbetapir PET imaging pattern. Fleisher et al. 60 applied 18F-florbetapir PET to a clinical cohort of 210 subjects including probable AD, mild MCI, and older healthy controls, with data pooled from four phase I and II clinical trials that used 18F-florbetapir PET imaging under similar protocols. The authors reported that mean (SD) cortical-to-whole-cerebellar SUVRs were significantly distinct among the 3 groups and in the expected direction: 1.39 (0.24) for the probable AD group, 1.17 (0.27) for the MCI group, and 1.05 (0.16) for the controls (P = 2.9×10−14). There were also significant differences among the three groups in the percentage meeting the amyloid level associated with AD by SUVR criteria (SUVR ≥ 1.17) and the percentage meeting SUVR criteria for the presence of any identifiable Aβ (SUVR > 1.08). There was also a strong and direct correlation of florbetapir cortical retention with aging and the presence of the APOE ε4 allele (p=0.048).
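The SUVR cutoffs reported by Fleisher et al. translate directly into a small classifier. A sketch follows, with hypothetical uptake values; the 1.17 and 1.08 cutoffs are the ones quoted above.

```python
# Cortical-to-whole-cerebellar SUVR with the florbetapir study's cutoffs.
def suvr(cortical_uptake: float, cerebellar_uptake: float) -> float:
    return cortical_uptake / cerebellar_uptake

def classify_amyloid(ratio: float) -> str:
    if ratio >= 1.17:
        return "amyloid level associated with AD"
    if ratio > 1.08:
        return "identifiable amyloid present"
    return "no identifiable amyloid"

r = suvr(cortical_uptake=1.30, cerebellar_uptake=1.05)  # hypothetical SUV values
print(f"SUVR = {r:.2f}: {classify_amyloid(r)}")
```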
18F-florbetaben has also been shown to bind Aβ in the brain, selectively labeling Aβ plaques and cerebral amyloid angiopathy (CAA) in AD tissue. 61 A phase II study 62 reported a sensitivity of 80% and a specificity of 91% for discriminating individuals with probable AD from age-matched controls. Phase III studies in 238 patients from 17 centers have been completed. 63 The investigators reported 100% sensitivity and 92% specificity for 18F-florbetaben PET in subject-level analysis, but 77% sensitivity and 94% specificity for regionally detected Aβ, compared with postmortem diagnosis. Ong et al. 64 found a high Aβ burden in 53% of MCI subjects when using an SUVR of 1.45 as the threshold. There is a good direct correlation between 18F-florbetaben and PiB global SUVR values, with almost the same diagnostic power to differentiate AD from healthy subjects. 65 18F-flutemetamol PET with visual assessment has been reported to have 93.1% sensitivity and 93.3% specificity against a standard of truth among AD, MCI, and healthy control subjects. 66 Duara et al. 67 suggested that 18F-flutemetamol PET and sMRI provide additive information for classifying amnestic MCI subjects: the overall correct classification rate for amnestic MCI using an 18F-flutemetamol PET SUVR of 1.4 and medial temporal atrophy derived from sMRI was 86%. A longitudinal study 68 in AD and amnestic MCI with 2-year follow-up reported that 18F-flutemetamol PET SUVR showed clear group clustering, while hippocampal volume showed extensive overlap between groups; more than 89% of the converters came from the flutemetamol-positive group. Pooled results of the phase III studies of 18F-flutemetamol have not yet been announced.
Johnson et al. 69 reviewed recent publications in the clinical dementia setting and reported that 96% of AD patients were amyloid-positive. Amyloid-negative scans in patients diagnosed with probable AD, on the other hand, may represent imprecise clinical diagnosis, or the patients may bear amounts of tissue amyloid plaque too small for PET to detect, which may become detectable years later on follow-up.
Although a number of new PET probes are currently under investigation in academia and under development by pharmaceutical companies, there are concerns about the clinical value of Aβ imaging, and questions have recently been raised. Moghbel et al. 70 reviewed the technical aspects and described several potential problems, such as partial volume effects resulting in underestimated SUV data, a high ratio of nonspecific to specific WM uptake, discordance between the concentration of Aβ in the brain and histopathological and immunohistochemical findings, and questions about the specificity of these tracers. Investigators in the amyloid imaging field have answered some of Moghbel's questions, 71 but some issues still need to be clarified. Kepe et al. 72 pointed out the lack of in vivo binding validation of these probes and the consequent deficiency in the understanding of their tissue binding and specificity; it is uncertain how amyloid agents interact with the many forms of Aβ. Lockhart et al. 73 demonstrated that PiB clearly delineates classical plaques as well as diffuse plaques and CAA, and it was also found to label NFTs, with lower intensity than Aβ pathology. Cairns et al. 74 reported a case of mild AD in whom PiB PET was unremarkable but biofluid markers were positive; autopsy performed 2.5 years after the scan showed lesions meeting neurofibrillary stage III and Braak and Braak stage C, with no evidence of any other neurodegenerative or clinically meaningful vascular disease. Aβ deposition is also an important pathology in Down's syndrome, and Aβ has been reported as an additional pathology in Parkinson's disease, dementia with Lewy bodies, Pick's disease, corticobasal degeneration, amyotrophic lateral sclerosis, and progressive supranuclear palsy. 75 Ly et al. 76 found that most ischemic stroke patients in their study had high PiB uptake in the peri-infarct region compared with the contralateral side, particularly in the WM around the infarct; the cause of this focal PiB retention is uncertain and requires further investigation. There is also evidence that even cognitively normal individuals may show high levels of 11C-PiB, the ligand used to detect Aβ, suggesting that a large degree of Aβ buildup may not always translate into the development of AD symptoms. Healthy elderly controls can also show high PiB retention, 77,78 and some PiB-positive elderly healthy controls have demonstrated normal cognition. 79 Moreover, it is common to see numerous degenerative changes, including NFTs and Aβ plaques, in a large number of cognitively normal individuals. 80 Additionally, the rapid peripheral and central metabolism of these probes and the brain transport of their metabolites are severe limitations at the very heart of tracer design and development. These limitations cause extensive and nonspecific uptake of amyloid agents in WM, which affects both AD patients and controls. 72 Many studies have found non-negligible WM uptake in both AD and controls. 65,66,81 A recent study by Barthel 62 reported the highest 18F-florbetaben SUVR in cerebral WM compared with other cortical and subcortical regions. Nonspecific WM uptake can produce spillover and partial volume effects in neighboring GM, which is of particular concern in the atrophic AD brain, and the extensive WM uptake can make imaging interpretation unreliable.
Moreover, this phenomenon provides additional evidence that PiB and stilbene derivatives are not specific to the Aβ target, as some studies have shown that these probes can bind to myelin with high affinity. [82][83][84] Villemagne et al. 71 have countered that this WM uptake is similar between AD and normal controls and that the partial volume effect is not a limitation exclusive to amyloid PET imaging but affects all PET imaging procedures equally. They claimed that this limitation has not proven to be a major obstacle to the quantitative analysis of Aβ deposits in cortical GM, and that for many clinical purposes visual assessment has higher priority than absolute quantification and localization. However, the authors accepted that many improvements must be made in the development of more sensitive and specific probes with lower WM retention, which would allow the incorporation of more suitable imaging tools to quantify and better classify patients with cognitive impairment.
It is now well recognized that Aβ deposition starts in preclinical AD, increases up to the time when the AD diagnosis is confirmed clinically, and then reaches a plateau as the disease progresses. Cerebral amyloidosis by itself is not sufficient to produce the cognitive deficits of AD, which are more closely related to FDG PET and sMRI as biomarkers of neurodegeneration. Anti-Aβ therapies have repeatedly been reported to be ineffective. Thus, there is no validated clinical value of amyloid imaging in monitoring disease progression.
Considering the limitations discussed above, amyloid imaging demands careful discussion of its proper clinical utility. Recently, the Society of Nuclear Medicine and Molecular Imaging (SNMMI) and the Alzheimer's Association (AA) developed appropriate use criteria for amyloid PET. 85 It is considered appropriate for individuals with stable or progressive unexplained MCI; for those satisfying core clinical criteria but with either an atypical clinical course or an etiologically mixed presentation; and for those with progressive dementia and an atypically early age of onset. Patients meeting one of these appropriate-use criteria should have the following characteristics: 1) a cognitive deficit confirmed by an objective neuropsychological test; 2) a diagnosis of possible AD, but with the diagnosis remaining uncertain after a comprehensive evaluation by a dementia expert; and 3) an expectation that knowledge of the pathological Aβ status will increase diagnostic certainty and change management. Inappropriate situations include patients who fulfill the diagnostic criteria for probable AD with a typical age of onset; determining the severity of cognitive impairment; testing based solely on a positive family history of dementia or the presence of APOE ε4; cognitive impairment unconfirmed by clinical examination; suspected autosomal mutation carriers; asymptomatic individuals; and nonmedical uses such as legal, insurance-coverage, or employment screening.
However, there is considerable skepticism regarding the ability of amyloid imaging to significantly change outcomes and management in patients with prodromal or even established AD. The main issue is that Aβ PET findings are not specific to AD: about 30% of older people have Aβ deposition yet do not have, and will not develop, AD. 86 Then, in July 2013, the Centers for Medicare and Medicaid Services (CMS) released a draft decision memo indicating that Medicare would pay for PET scans aimed at visualizing beta-amyloid plaques in patients' brains only in the context of rigorous clinical trials, under the agency's "coverage with evidence development" (CED) policy. The decision focuses mainly on the role of a positive scan, whereas the SNMMI and AA guideline considers both positive and negative findings, a negative finding serving to rule out AD. CMS reported that use of the scans to exclude AD in narrowly defined and clinically difficult differential diagnoses is promising. Nevertheless, CMS acknowledged that more evidence is needed, including on when the scan would replace or complement other biomarkers for particular patient subpopulations.
Limitation. Amyloid imaging tracers do not fully realize the fundamental advantage of PET over other imaging modalities: the ability to make quantitative functional assessments of specific tissues in humans. An appropriate amyloid PET probe should provide signal only from Aβ retention, and its peripheral metabolites should be minimal or should cross the blood-brain barrier in a predictable way that can be accounted for in quantification. On current evidence, the in vivo specificity of the amyloid agents has not been fully established and the sources of nonspecific uptake have not been identified. Moreover, the technical limitations of PET systems have not been corrected.
Even setting these limitations aside, the diagnostic value of amyloid imaging remains questionable.
The current criteria for the neuropathological diagnosis of AD from the National Institute on Aging-Alzheimer's Association 87 use 3 parameters: (A) an immunohistochemistry-derived Aβ plaque score as described by Thal et al.; 88 (B) an NFT stage from immunohistochemistry for tau or phospho-tau; and (C) a neuritic plaque score from Thioflavin S or modified Bielschowsky staining, as recommended by the Consortium to Establish a Registry for Alzheimer's Disease (CERAD) protocol. These are combined into an "ABC" score and transformed into one of four levels of AD neuropathologic change: Not, Low, Intermediate, or High. For the Aβ plaque score, an alternative method that identifies progressive accumulation of Aβ deposition in the medial temporal lobe only is recommended, as it is highly correlated with Thal phases. 88 At present, amyloid imaging may provide information on neuritic plaques that fulfills only criterion (C); it cannot yield an appropriate signal in the medial temporal lobe and is insensitive to tau deposition. Thus, there is not yet sufficiently strong evidence that amyloid imaging is suitable for AD diagnosis, which is the indication most often described in the literature.
Author contributions. All authors contributed substantially to the preparation and revision of the manuscript.
| 8,136.8 | 2015-10-01T00:00:00.000 | ["Physics"] |
Level of Use and Satisfaction of E-Commerce Customers in Covid-19 Pandemic Period: An Information System Success Model (ISSM) Approach
Pandemic outbreaks of COVID-19 have made customers take drastic steps to help world governments prevent further spread, one of which is social distancing. This policy has made buying and selling online a convenient option for fulfilling needs for goods and/or services. The purpose of this study was to determine the level of use and satisfaction of e-commerce customers in the COVID-19 pandemic period using the information system success model (ISSM) approach, formed through system quality, information quality, and service quality. The research method used a quantitative approach, distributing questionnaires to 206 e-commerce customers. Data analysis used Structural Equation Modeling (SEM), and the results confirm that system quality, information quality, and service quality affected the level of use and user satisfaction of e-commerce customers. E-commerce companies are recommended to maintain and even improve system quality and information quality, because information that is unattractive, irrelevant, or difficult to understand results in low information quality, which in turn can reduce the level of use and customer satisfaction.
INTRODUCTION
Electronic commerce (e-commerce) is growing in popularity in the world economy. It began in 1995, when digital goods were first needed to support company transactions; digital goods are things that can be delivered over digital networks (Laudon & Laudon, 2015). E-commerce is sales made through electronic media. It is also a type of electronic business mechanism that focuses on individual-based business transactions using the Internet as a medium for exchanging goods or services (Surawiguna, 2010). E-commerce brings big business opportunities, such as product sales, online service provision, and revenue growth (Rohm & Swaminathan, 2004), especially for companies such as e-retailers, because of its easy and interactive nature, lower costs, and high customization and personalization for customers (Santos-Vijande et al., 2013).
E-commerce has become one of the important strategies in business today, because it can increase the efficiency of company operations. The types of e-commerce by type of relationship include (Abdu'a & Wasiyanti, 2019): (a) Business to Business (B2B), transactions between one company and another, for instance a distributor obtaining goods from a manufacturer, where prices are often adjusted to the order quantity and negotiation; and (b) Business to Consumer (B2C), transactions usually directed at end consumers, where the seller can be a distributor, producer, or retailer. The coronavirus disease 2019 (COVID-19) pandemic, which emerged in Wuhan, has had a major impact not only on the health sector but also on the economy, including economic activities related to purchasing goods or services. The COVID-19 pandemic, followed by the implementation of social distancing, has led to new consumer behavior in purchasing patterns (https://nasional.kompas.com/read/2020/04/25/15472271/update-25-april-kasus-covid-19-di-indonesia-mencapai-8607). Consumers now use online tools and increasingly look to online channels to research and buy the products or services they need (Dirgantari et al., 2019). This also makes competition between e-commerce companies more intense.
The outbreak of COVID-19 in Indonesia has made online shopping the choice of many parties for obtaining goods, as shown in Table 1, where there has been an increase in the number of visits to various marketplaces in Indonesia.
Based on commercial data in Indonesia, estimated total e-commerce transactions during the COVID-19 pandemic peaked after the announcement of the Large-Scale Social Restrictions (PSBB) policy on March 31, 2020, at 670,755 transactions. Total estimated sales during this period were quite high, at Rp 12.3 billion. This e-commerce activity takes place on a variety of platforms, as shown in Figure 1; Indonesia has the highest level of e-commerce usage among countries in the world, with 90% of the country's Internet users aged 16 to 64 reporting that they have purchased products or services online, as reported by GlobalWebIndex (Kemp & Moey, 2019).
E-commerce companies should increase customer satisfaction to maintain and even increase online buying and selling transactions. In the context of e-commerce, customer satisfaction is usually defined as a favorable customer attitude toward e-commerce that results in consumers making repeat purchases (Goid et al., 2011). Kotler and Keller (2016) argue that satisfaction is the level of feeling at which someone states the result of comparing the performance of products or services received with what was expected after purchase or use; consumer satisfaction is a feeling of pleasure or disappointment that comes from comparing one's impression of a product's performance with one's expectations. Customer expectations are believed to play a large role in determining satisfaction (Meidita et al., 2016). Customer expectations are the customer's beliefs before trying or buying a product, used as a standard or reference in judging the product's performance. One factor in winning the competition is the number of customers who use the products or services offered by the company (Zeithaml et al., 1993). An increasing level of satisfaction will increase consumers' tendency to repurchase the products offered by the company, and satisfied consumers create a good cooperative relationship between consumers and companies (Buttle & Maklan, 2019).
One key to the success of e-commerce is its information systems. The success of e-commerce must be supported by a good information system, because without one there would be no transactions (Abdu'a & Wasiyanti, 2019). A good information system uses technology that is sophisticated and complete, yet simple; information system success models are deliberately simple frameworks for reviewing the use of information systems (Widiaty et al., 2020). Thus, the benchmarks of information system success can be captured by a simple model that is nevertheless considered valid (Hartono et al., 2010).
Since its publication in 1992, nearly 300 journal articles have referred to the DeLone and McLean (2003) success model as a basis for measuring variables related to information systems (IS). The model is based on Shannon and Weaver's classic communication theory, as adjusted by Mason to measure impact. As a strong intermediary of communication and trade, the Internet is a communication phenomenon that fits naturally into a measurement framework (such as the DeLone and McLean model) built on communication theory. In the e-commerce context, the main system users are customers or suppliers rather than internal users; customers and suppliers use the system to buy or sell and to carry out business transactions, and these electronic decisions and transactions can affect individual users, organizations, industries, and even the national economy. Six variables determine the quality of e-commerce systems in the Information System Success Model (ISSM) of DeLone and McLean (Angelina et al., 2019): (1) system quality, (2) information quality, (3) service quality, (4) use, (5) user satisfaction, and (6) net benefits. In this study, measurement is limited to use and customer satisfaction.
The use of information system technology has proven able to reduce costs, create faster and more efficient work processes, and offer a high degree of flexibility. The updated DeLone and McLean model has been widely applied, including a modified version used to measure the success of e-learning systems at universities. It has also been used to evaluate the success of information systems management projects and can be useful for organizational decision-making when evaluating the implementation of information systems (Gu & Jung, 2013).
Thus, the purpose of this study was to determine the level of use and satisfaction of e-commerce customers during the COVID-19 pandemic using the information system success model (ISSM) approach, formed through system quality, information quality, and service quality.
METHODS
This research used a quantitative method, with data obtained from respondents' direct responses to questionnaires, to explore specific information describing the level of use and e-commerce customer satisfaction in the COVID-19 pandemic period with the Information System Success Model (ISSM) approach through System Quality (SQ), Information Quality (IQ), Service Quality (SerQual), and Use (U).
The dimensions used for System Quality (SQ) are adaptability, availability, reliability, response time, and usability. The dimensions of Information Quality (IQ) include completeness, ease of understanding, personalization, relevance, and security. The Service Quality (SerQual) dimensions are assurance, empathy, and responsiveness. The dimensions of Use (U) include nature of use, navigation patterns, number of site visits, and number of transactions executed, and the dimensions of User Satisfaction (US) are repeat purchases, repeat visits, and user surveys (DeLone & McLean, 2003).
The data analysis technique used was Structural Equation Modeling (SEM), for which basic assumptions need to be met, one of them concerning sample size. For SEM models with up to 5 latent variables (constructs), each explained by several indicators, a sample size of 100-150 respondents has been considered adequate (Saputro et al., 2015), while Ghozali (2014) suggested an SEM sample size between 100 and 200 respondents; large sample sizes are critical for obtaining accurate parameter estimates. The number of samples in this study was therefore set at 200 Shopee customers, because Shopee is ranked first among the top five e-commerce sites in Indonesia (https://iprice.co.id/insights/mapofecommerce/). This research used a Likert scale ranging from strongly disagree (score 1) to strongly agree (score 5) and a purposive sampling method. The survey was conducted over 2 months, during March-April 2020, on Shopee e-commerce customers.
The research model is shown in Figure 2.
RESULTS AND DISCUSSION
After the data were processed, 206 valid respondents were obtained. The demographic characteristics shown in Table 2 indicate that 62.14% of respondents were female and 37.86% male, while the majority (88%) were in the 20-24 years age group. The majority of respondents had more than 5 previous online shopping experiences.
To determine the internal reliability of the variables used in the model, Cronbach's alpha was computed for the responses. The results in Table 3 indicate that the Cronbach's alpha values for all variables are above 0.8, which means all variables are reliable. The results also show that the research model fit was good, as indicated by the model fit indices in Table 4.
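For reference, Cronbach's alpha of the kind used for the reliability check above can be computed from a respondents-by-items matrix with plain NumPy. The 5-point Likert responses below are simulated, not the study's data.

```python
# Cronbach's alpha: k/(k-1) * (1 - sum of item variances / variance of totals).
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """items: 2-D array with rows = respondents, columns = scale items."""
    k = items.shape[1]
    item_var_sum = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1.0 - item_var_sum / total_var)

rng = np.random.default_rng(1)
trait = rng.integers(3, 6, size=(206, 1))                      # shared component
responses = np.clip(trait + rng.integers(-1, 2, size=(206, 5)), 1, 5)
print(f"alpha = {cronbach_alpha(responses):.2f}")  # values above 0.8 are conventionally reliable
```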
To test the hypotheses and the structural model, confirmatory factor analysis was used to evaluate the significance and strength of each path. The results are shown in Figure 3 and Table 5. The path coefficients show that all 7 tested relationships are significant, which means that system quality, information quality, and service quality significantly affected use and user satisfaction.
As seen in Table 5, H1 (System Quality affects Use) shows that system quality can increase use; this result is in accordance with the study by Veranika and Murtini (2017), which found that, for the use variable, system quality still offers opportunities for improvement in system implementation. Implementing system quality aims to develop the system to increase use: developers can improve Shopee's system quality to meet the criteria expected by users so that the level of use continues to increase in the future. H2 (the effect of System Quality on User Satisfaction) shows that system quality can increase user satisfaction. This is consistent with research by Sultono et al. (2016), which suggests that system quality should be evaluated periodically by management, involving users, in order to ensure that user needs are met and user satisfaction is achieved. The result for H3 (the effect of Information Quality on Use) shows that information quality can increase use; in this research, information quality had a positive effect on use. However, research conducted by Rahayu et al. (2018) states that information quality does not have a significant effect on use: unattractive presentation of information, imprecise relevance, and language that is not easily understood all diminish information quality.
The effect of Information Quality on User Satisfaction (H4) shows that information quality can increase user satisfaction. This is in accordance with the research conducted by Wibowo (2013), in which information quality had the most dominant influence on user satisfaction because information about the stages the user must pass through was clearly conveyed (with pictures that make it easy for users to understand each stage well). H5, the effect of Service Quality on Use, shows that service quality can increase use. Research conducted by Kallweit et al. (2014) on the mediating effects of service quality in technology implementation models found that service quality partly mediates attitudes or intentions to use a service or product. Therefore, retailers must emphasize service quality related to use.
The result for hypothesis six (H6), the effect of Service Quality on User Satisfaction, shows that service quality can increase user satisfaction. In a study conducted by Susnita (2020), service quality had a positive and significant effect on user satisfaction, whereas research conducted by Mulyapradana et al. (2020) showed that service quality did not have a significant effect on user satisfaction because the only service quality items with a significant effect were the responsiveness items. Hypothesis seven (H7), the effect of Use on User Satisfaction, shows that use can increase user satisfaction. This is in accordance with research conducted by Dwivedi et al. (2013), which explains that factors such as system quality and use affect consumer attitudes positively and are thus related to user satisfaction.
CONCLUSION
Overall, the information system success model (ISSM) approach, formed through system quality, information quality, and service quality, has been proven to affect the level of use and e-commerce consumer satisfaction, especially during the current COVID-19 pandemic. E-commerce companies must continue to improve system quality to meet the criteria expected by users so that the level of use increases in the future, and must maintain and even improve information quality, because unattractive presentation of information, imprecise information relevance, and language that is not easily understood are deficiencies in information quality that can reduce the level of use and customer satisfaction.
ACKNOWLEDGEMENTS
The researchers would like to offer gratitude to Universitas Pendidikan Indonesia for supporting this research.
AUTHORS' NOTE
The author(s) declare(s) that there is no conflict of interest regarding the publication of this article. The authors also confirm that the data and the paper are free of plagiarism. | 3,458 | 2020-09-01T00:00:00.000 | [
"Business",
"Computer Science"
] |
SASO: Joint 3D Semantic-Instance Segmentation via Multi-scale Semantic Association and Salient Point Clustering Optimization
We propose a novel 3D point cloud segmentation framework named SASO, which jointly performs semantic and instance segmentation tasks. For the semantic segmentation task, inspired by the inherent correlation among objects in spatial context, we propose a Multi-scale Semantic Association (MSA) module to explore the constructive effects of semantic context information. For the instance segmentation task, unlike previous works that utilize clustering only in the inference procedure, we propose a Salient Point Clustering Optimization (SPCO) module that introduces a clustering procedure into the training process and impels the network to focus on points that are difficult to distinguish. In addition, owing to the inherent structures of indoor scenes, the imbalance problem of the category distribution is rarely considered but severely limits the performance of 3D scene perception. To address this issue, we introduce an adaptive Water Filling Sampling (WFS) algorithm to balance the category distribution of the training data. Extensive experiments demonstrate that our method outperforms state-of-the-art methods on benchmark datasets in both semantic segmentation and instance segmentation tasks.
Introduction
Scene perception plays a decisive role in many applications, such as autonomous driving, robot navigation and augmented reality.
With the growth of computer technology and artificial intelligence in recent years, the scene perception ability of intelligent devices has received increasing attention from both academia and industry, especially for 3D scenes, which can represent the real environment intuitively. Semantic segmentation and instance segmentation of 3D scenes are fundamental and critical portions of 3D scene perception. Nevertheless, how to model 3D space in a digital form suitable for the scene segmentation task is an open problem. Various representations of 3D scenes have been investigated, such as depth maps, voxels, multi-views, meshes, and point clouds. Based on these representations, a series of excellent works have operated on the segmentation task, such as [1][2][3][4][5][6][7][8][9][10][11][12][13][14][15][16][17][18]. Among these representations, point clouds are the most compact and natural for the geometric distributions of real 3D scenes, and they have been applied extensively in recent research.

In terms of semantic and instance segmentation tasks in 3D point clouds, building on the great success achieved in recent years [1,2,5,6,[9][10][11][12][13][14][19][20][21] for each single task, joint learning methods for both tasks [14][15][16] have opened up a new and effective way to explore 3D scene segmentation, improving performance and promoting further development. Compared with the method [14] exploiting a similarity matrix, [15,16] utilized clustering algorithms to generate the instance segmentation result, which proved to be more effective and flexible. Nevertheless, whether the convergence direction of the training process is consistent with the orientation of the clustering algorithm was rarely considered. Additionally, marginal points are usually harder to distinguish than central points, and in the multi-object case the internal points are easier to distinguish than the boundary points across objects, as shown in Figure 3. To address this problem, we propose a Salient Point Clustering Optimization (SPCO) module to introduce clustering into the training process and saliently focus on the points that are harder to distinguish in the clustering process.

As for semantic segmentation, the spatial distribution of the semantic information has a strong association, which can be further exploited. For example, when a point comes from a table, it is highly possible that some neighboring points belong to a chair rather than to the ceiling. The most common approach to exploring semantic associations is the Conditional Random Fields (CRF) algorithm [22], which utilizes normalization based on statistical global probability and has been proved effective in segmentation tasks. However, CRF is complex and consumes plenty of resources, and how to exploit semantic associations more efficiently remains an open problem. Consequently, we propose a Multi-scale Semantic Association (MSA) module to fine-tune the semantic segmentation results, based on multi-scale semantic association maps generated by statistical analysis.

In addition, because of the inherent structures of indoor scenes, the imbalance problem of the category distribution badly limits the performance of 3D scene perception. For example, walls and floors certainly exist in every room while objects of other categories, such as sofas, sinks, and bookshelves, may not. This means the number of points from walls and floors is much larger than that from other categories. The imbalance problem is rarely considered in previous works. Thus, we present an adaptive Water Filling Sampling (WFS) algorithm to address this problem by changing the sampling probability of each category adaptively.
To summarize, our contributions are the following:
• We propose a Salient Point Clustering Optimization (SPCO) module to introduce clustering into the training process and saliently focus on the points that are harder to distinguish in instance segmentation.
• We propose a Multi-scale Semantic Association (MSA) module based on statistical knowledge to explore the potential spatial association of the semantic information in point clouds.
• We propose an adaptive Water Filling Sampling (WFS) algorithm to balance category distribution in the point clouds, which is rarely considered but critical in 3D scene perception.
• Extensive experiments demonstrate that our SASO outperforms the state-of-the-art related methods on benchmark datasets under both semantic and instance segmentation criteria.
Related Works
This section reviews recent deep learning-based techniques applied to 3D point clouds. In recent years, a series of deep learning architectures have been proposed to perform encoding and decoding for 3D point clouds or their derived representations, and these are widely utilized in many 3D vision tasks such as semantic and instance segmentation, object part segmentation, and object detection. We divide these methods into four categories based on the data representations. Furthermore, we will introduce recent 3D semantic and instance segmentation research progress based on the above techniques.
Volumetric Methods
Because 3D point clouds are irregular, the simplest though naive method is to voxelize the irregular point clouds into regular 3D grids so that 3D convolutions can be applied [1,6,[23][24][25][26][27][28][29][30][31][32]. Specifically, Wu et al. [23] represented a geometric 3D shape as a probability distribution of binary variables on a 3D voxel grid, using a Convolutional Deep Belief Network. Maturana et al. [31] proposed an architecture to efficiently deal with large amounts of point cloud data by integrating a volumetric Occupancy Grid representation with a supervised 3D Convolutional Neural Network. Zhou et al. [25] removed manual feature engineering for 3D point clouds and divided point clouds into equally spaced 3D voxels, then transformed a group of points within each voxel into a unified feature representation through a newly introduced voxel feature encoding layer. Wang et al. [1] designed a spatial dense extraction module to preserve the spatial resolution during the feature extraction procedure, alleviating the loss of detail caused by sub-sampling operations such as max-pooling. Although the volumetric data representation is the most common and simplest form, it has an obvious drawback: the cubic complexity of 3D convolutions leads to a dramatic increase in memory consumption and computing resources. To tackle this issue, [24,28] proposed octree representations to improve network efficiency and reduce computing resources. In addition, [6,30] proposed sparse convolutional operations to process spatially sparse 3D point clouds and achieved impressive results. Although these methods try to alleviate the efficiency problem, they are much more complex than volumetric CNNs and cannot fundamentally solve the memory consumption problem.
Multi-view Methods
Another common representation for 3D point clouds is the multi-view representation. In recent years, Convolutional Neural Networks have proved successful in a wide range of 2D visual tasks. To take full advantage of the strong extraction capability of classical CNNs, 3D point clouds are first projected into multiple pre-defined views, which are then processed by well-designed image-based CNNs to extract features, such as [2,26,[33][34][35][36][37]. Specifically, Guerry et al. [37] used 3D-coherent synthesis of scene observations and mixed them in a multi-view framework for 3D labeling. Su et al. [33] presented a novel CNN architecture that combines information from multiple views of a 3D shape into a single, compact shape descriptor offering even better recognition performance. Dai et al. [2] encoded sparse 3D point clouds with a compact multi-view representation, including a bird's eye view and a front view as well as the RGB image, to perform high-accuracy 3D object detection. You et al. [36] proposed PVNet to integrate both the point cloud and the multi-view data towards joint 3D shape recognition. Although the multi-view representation of point cloud data is reasonable, the projection process from 3D to 2D prevents full utilization of the 3D geometric information.
Graph Convolution Methods
Graph structure is a native representation of irregular data, such as 3D point clouds, offering a compact yet rich representation of contextual relationships between points of different object parts [19,20,[38][39][40][41]. Specifically, Bruna et al. [38] proposed two constructions based on a hierarchical clustering of the domain and on the spectrum of the graph Laplacian, showing that for low-dimensional graphs it is possible to learn convolutional layers with a number of parameters independent of the input size, resulting in efficient deep architectures. Wang et al. [39] operated spectral graph convolution on a local graph, combined with a novel graph pooling strategy, to augment the relative layout of neighboring points as well as their features. Te et al. [40] treated the features of points in a point cloud as signals on a graph, and defined the convolution over the graph by Chebyshev polynomial approximation, leveraging spectral graph theory. They also designed a graph-signal smoothness prior in the loss function to regularize the learning process. Although graph convolutional methods have achieved significant performance, these methods, constructed on the Laplacian matrix, are computationally complex due to the Laplacian eigen-decomposition and require a large number of parameters to express the convolutional filters, while lacking spatial localization.
Point clouds Methods
Point clouds are an intuitive, memory-efficient 3D representation which is well suited to representing geometric details. How to apply deep learning techniques to point clouds directly, simply, and efficiently is a critical problem. To address this challenge, Qi et al. [3] designed a novel type of neural network, PointNet, that directly consumes point clouds and well respects the permutation invariance of points in the input. More specifically, they solved the disorder problem of point clouds through max pooling and maintained rotation invariance through the spatial transformer network STN. The extracted features of each point combine its own information with the global information. PointNet has proved efficient in many applications ranging from object classification, part segmentation, and object detection to scene semantic parsing. However, PointNet relies only on the max-pooling layer to learn global features and does not consider local relationships. Therefore, a series of works [4,5,8,42,43] were developed through investigations of local context and hierarchical learning structures. Typically, Qi et al. [4] proposed PointNet++ based on their previous work PointNet; it utilizes PointNet as a local feature extraction module to perform hierarchical feature extraction like CNNs, and finally uses upsampling to generate the final high-level features. Li et al. [43] proposed PointCNN, which uses an MLP to learn a transformation matrix to solve the disorder problem of point clouds, and then utilizes the introduced X-conv module to perform convolution on the transformed features. This method achieved performance similar to PointNet++.

Fig. 2: An illustration of our MSA module. First, we create the multi-scale semantic association maps Ms by statistics with ball query over all the training 3D scenes. For a point i in the semantic prediction result, we also generate a vector Pcorr(i) indicating the probabilities of the categories of the surrounding points obtained with ball query. Then we calculate the similarity between this vector and each row (category) in Ms and normalize it into a probability vector; the detail of the calculation is formulated in equation (7). The final prediction for each point is the fusion of the original predicted probability and the fine-tuned probability, as formulated in equation (8).
3D semantic and instance segmentation
Recent advances in learning-based techniques have also led to various cutting-edge 3D semantic and instance segmentation approaches [1-8, 19, 20, 26, 40, 43-45]. Volumetric representations have been adopted by [1,6] to transfer 3D point clouds to regular grids and apply CNNs to extract features. [19,20,40] utilized graph convolutional networks to model the relationships of 3D points, which offers a compact yet rich representation of context. [2,26] transferred 3D point clouds into multiple views to take full advantage of the strong extraction capability of classical CNNs. [3][4][5]7] presented more efficient and flexible ways to apply MLPs directly on point clouds while respecting the permutation invariance of points. [43][44][45] operated the segmentation task by designing novel CNNs on point clouds, while Huang et al. [8] and Ye et al. [9] proposed new approaches by slicing the point clouds and utilizing recurrent neural networks to exploit the inherent contextual features. 3D instance segmentation is a relatively new research area that attracts more and more attention [11][12][13]. Specifically, Lahoud et al. [12] proposed a network based on 3D voxel grids, which treats the instance segmentation task as a multi-task learning problem: the network generates abstract feature embeddings for voxels and estimates instance centers to learn instance information. Yang et al. [11] introduced a framework which simultaneously generates 3D bounding boxes and predicts the binary masks for the points within each box in one stage. Recently, Wang et al. [14] opened up a framework jointly operating semantic and instance segmentation in 3D point clouds. Inspired by the proposal mechanism in 2D Faster R-CNN [46], they proposed a similarity matrix indicating the similarity between each pair of points in the embedded feature space to predict point grouping proposals; the network then predicts the corresponding semantic class for each proposal to generate the final semantic-instance results. Although the similarity matrix is effective and natural for indicating proposals, it generates a large and inefficient matrix which suffers from heavy computation and memory consumption. Subsequent proposal methods [10,18] were proposed to boost the performance of similar frameworks while still depending on a two-stage procedure and the time-consuming non-maximum suppression algorithm. More recently, [15,16] utilized clustering algorithms to divide points into different objects, which was demonstrated to be more effective and efficient than proposal methods. Nevertheless, they did not consider whether the convergence direction of the training process is coupled with the orientation of the clustering algorithm. In addition, different points have various difficulties in being divided into distinct objects, which is rarely considered. In this work, we propose a framework which takes this critical problem into consideration and prove that it is significant and effective.
Proposed Method
In this section, we first introduce the baseline framework of our network, which jointly performs the semantic and instance segmentation tasks. Then we give the details of our MSA module for semantic segmentation in Sec 3.2, as depicted in Figure 2. Next, we expound our SPCO module for instance segmentation in Sec 3.3, as shown in Figure 3. The whole framework of our method can be seen in Figure 1. Finally, the adaptive Water Filling Sampling (WFS) algorithm is explained in detail in Sec 3.4.
Baseline Framework
As depicted in Figure 1, the network without MSA, and with SPCO replaced by normal clustering, is the baseline framework. First, point clouds of size N are encoded into a high-dimensional feature matrix $F_{share} \in \mathbb{R}^{N \times D}$ by the encoder PointNet++ [4]. Next, two task branches separately decode $F_{share}$ for their own missions. In the semantic segmentation branch, $F_{share}$ is decoded into the semantic feature matrix $F_{sem} \in \mathbb{R}^{N \times D}$, which then outputs the semantic predictions $P_{sem} \in \mathbb{R}^{N \times C}$, where C is the number of semantic classes. The instance segmentation branch decodes $F_{share}$ into the instance feature matrix $F_{ins} \in \mathbb{R}^{N \times D}$, which is utilized to predict the per-point instance embeddings $E_{ins} \in \mathbb{R}^{N \times E}$, where E denotes the length of the output embedding dimensions. These embeddings are used to calculate the distances among the points for instance clustering.
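As a minimal sketch of this two-branch layout (not the authors' implementation), the following PyTorch snippet uses a shared per-point MLP in place of the PointNet++ encoder and decoders; the defaults C = 13 and E = 5 follow the S3DIS setting described later, while the layer widths are illustrative assumptions.

```python
import torch
import torch.nn as nn

class JointSegBaseline(nn.Module):
    """Shared encoder with separate semantic and instance heads."""
    def __init__(self, in_dim=9, D=128, C=13, E=5):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(in_dim, D), nn.ReLU(), nn.Linear(D, D))
        self.sem_head = nn.Sequential(nn.Linear(D, D), nn.ReLU(), nn.Linear(D, C))
        self.ins_head = nn.Sequential(nn.Linear(D, D), nn.ReLU(), nn.Linear(D, E))

    def forward(self, points):                # points: (N, in_dim)
        f_share = self.encoder(points)        # F_share: (N, D)
        p_sem = self.sem_head(f_share)        # semantic logits: (N, C)
        e_ins = self.ins_head(f_share)        # instance embeddings: (N, E)
        return p_sem, e_ins
```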
During the training process, the semantic branch is supervised by a cross-entropy loss, while the loss function for instance segmentation, inspired by [15], is formulated as

$$L_{base} = L_{in} + L_{out} + \lambda \cdot L_{reg}$$

where the goal of $L_{in}$ is to pull the embeddings toward the mean embedding of the points in the instance, while $L_{out}$ guides the mean embeddings of instances to repel each other. We denote $L_{reg}$ as a regularization term that bounds the embedding values. The three loss terms are denoted as

$$L_{in} = \frac{1}{I}\sum_{i=1}^{I}\frac{1}{N_i}\sum_{j=1}^{N_i}\left[\,\|\tau_i - f_j\|_1 - \zeta_v\,\right]_+^2$$

$$L_{out} = \frac{1}{I(I-1)}\sum_{i_a=1}^{I}\sum_{\substack{i_b=1 \\ i_b \neq i_a}}^{I}\left[\,2\zeta_d - \|\tau_{i_a} - \tau_{i_b}\|_1\,\right]_+^2$$

$$L_{reg} = \frac{1}{I}\sum_{i=1}^{I}\|\tau_i\|_1$$

where I represents the number of ground-truth instances; $N_i$ is the number of points in instance i; $\tau_i$ denotes the mean embedding of instance i; $f_j$ is the embedding of a point; $\zeta_v$ and $\zeta_d$ indicate the margins for the variance and distance losses respectively; $i_a$ and $i_b$ represent different instances; $[x]_+ = \max(0, x)$ is the hinge function; and the $\ell_1$ distance is represented by $\|\cdot\|_1$. For inference, we use mean-shift clustering [47] on the instance embeddings to obtain the final instance labels, following [15]. The mode of the semantic labels of the points within the same instance is assigned as the predicted semantic class.
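A compact sketch of this instance loss, assuming the reconstructed equations above, is shown below; the margin values zeta_v and zeta_d are placeholders, since the paper only reports λ = 0.001.

```python
import torch

def instance_loss(e_ins, ins_labels, zeta_v=0.5, zeta_d=1.5, lam=0.001):
    """L_base = L_in + L_out + lam * L_reg over per-point embeddings (l1 distances)."""
    ids = ins_labels.unique()
    means = torch.stack([e_ins[ins_labels == i].mean(0) for i in ids])  # tau_i

    # L_in: pull each point toward its instance mean, hinged at zeta_v
    l_in = sum(
        torch.clamp((e_ins[ins_labels == i] - means[k]).abs().sum(1) - zeta_v,
                    min=0).pow(2).mean()
        for k, i in enumerate(ids)
    ) / len(ids)

    # L_out: push instance means at least 2 * zeta_d apart
    I = len(ids)
    l_out = e_ins.new_tensor(0.0)
    if I > 1:
        pd = torch.cdist(means, means, p=1)              # pairwise l1 distances
        hinge = torch.clamp(2 * zeta_d - pd, min=0).pow(2)
        l_out = (hinge.sum() - hinge.diag().sum()) / (I * (I - 1))

    l_reg = means.abs().sum(1).mean()                    # bound embedding norms
    return l_in + l_out + lam * l_reg
```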
Multi-scale Semantic Association Module
In 3D semantic segmentation, for a point of an object, the categories of the surrounding points are usually related to the category of the point itself; that is, the spatial distribution of the semantic information has a strong association, as in the example in Sec 1, which can be further exploited. Thus, based on the semantic context information, we propose our Multi-scale Semantic Association (MSA) module, shown in Figure 2. On the one hand, we create multi-scale semantic association maps by statistics with ball query over all the training 3D scenes, where $M_s \in \mathbb{R}^{C \times C}$ denotes the map at scale s and C is the number of classes. On the other hand, based on the decoded semantic output $P_{sem}$, we also generate the probabilities $P^s_{corr}$ of the categories of the surrounding points with a ball query at scale s. Then, for each point i in $P^s_{corr}$, we calculate the distance between $P^s_{corr}(i)$ and each row in $M_s$, and transform the result into a probability vector for this point, where a larger entry indicates a higher probability that the point belongs to the corresponding category. Note that the MSA module generates multiple probability vectors because of the multiple scales, and these probabilities come only from the surrounding points. Finally, the original predicted probability vector is added to the multiple probability vectors to obtain the final prediction. The procedure is described by equations (5)-(8):

$$P^s_{corr}(i) = \frac{1}{|B^s_i|}\sum_{j \in B^s_i} o\big(P_{sem}(j)\big) \quad (5)$$

$$D^s(i) = \big\|M_s - P^s_{corr}(i)\big\|_2 \quad (6)$$

$$P^s_{surr}(i) = \varphi\big(\sigma(-D^s(i))\big) \quad (7)$$

$$P_{out} = P_{sem} + \alpha_1 P^{s_1}_{surr} + \alpha_2 P^{s_2}_{surr} + \alpha_3 P^{s_3}_{surr} \quad (8)$$

where o denotes the one-hot operation, $|B^s_i|$ is the number of points in the ball query of point i at scale s, $M_s$ is the semantic association map at scale s, $\sigma$ denotes normalization, and $\varphi$ denotes the softmax operation. Note that $P^s_{corr}(i) \in \mathbb{R}^{1 \times C}$ and $M_s \in \mathbb{R}^{C \times C}$ are combined with the broadcast mechanism, and $\|\cdot\|_2$ is taken along axis 1. The final probability output is the sum of $P_{sem}$ and the $P_{surr}$ terms at different scales with different coefficients. In our experiments, we set $s_1, s_2, s_3$ to radii of 0.2, 0.3, 0.5 m and $\alpha_1, \alpha_2, \alpha_3$ to 0.5, 0.3, 0.2 respectively.
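The per-point refinement can be sketched as below, following the reconstructed equations (5)-(8) for a single scale; the ball-query neighbor lists are assumed to be precomputed, and the exact normalization inside the similarity step is an assumption.

```python
import numpy as np

def msa_refine(p_sem, neighbors, M_s, alpha):
    """One-scale MSA refinement: p_sem (N, C), neighbors[i] = indices in the
    ball query of point i, M_s (C, C) semantic association map."""
    N, C = p_sem.shape
    one_hot = np.eye(C)[p_sem.argmax(1)]               # o(P_sem(j))
    p_surr = np.zeros_like(p_sem)
    for i, nb in enumerate(neighbors):
        p_corr = one_hot[nb].mean(0)                   # Eq. (5): category mix around i
        d = np.linalg.norm(M_s - p_corr, axis=1)       # Eq. (6): distance to each row
        e = np.exp(-d)
        p_surr[i] = e / e.sum()                        # Eq. (7): softmax of -distance
    return p_sem + alpha * p_surr                      # Eq. (8), one scale's term
```

Summing such terms over the three scales with the coefficients α₁, α₂, α₃ reproduces equation (8).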
Salient Point Clustering Optimization Module
As explained in the baseline framework, for instance segmentation the goal of $L_{in}$ is to pull the embeddings toward the mean embedding of the points from the same object, while $L_{out}$ guides the mean embeddings of instances to repel each other in the training process. At inference time, the mean-shift clustering algorithm is utilized to distinguish points of different objects. However, the coupling between the convergence orientation in training and the clustering orientation in inference is not taken into consideration. In addition, the points of the same object have different difficulties in instance segmentation, as in the example in Sec 1. Thus, in this paper we propose a Salient Point Clustering Optimization (SPCO) module, which brings the mean-shift clustering algorithm into the training process and saliently focuses on the points that are harder to distinguish in the clustering process. More specifically, as shown in Figure 3, the mean-shift clustering algorithm is operated during training to simulate the clustering procedure at inference. Then, for the points that are clustered into one instance but do not belong to that instance according to the ground truth, we generate an additional loss $L_{cluster}$ to repel these embeddings away from the mean embedding of the instance. The loss $L_{cluster}$ is formulated in equation (9); note that the ID of a clustered instance is decided by the mode of the IDs in the ground truth, and, to converge to a reliable model, we add $L_{cluster}$ into the training process after 10 epochs.

$$L_{cluster} = \frac{1}{N_c}\sum_{i=1}^{N_c}\frac{1}{|W_i|}\sum_{j=1}^{|W_i|}\left[\,2\zeta_d - \big\|\bar{E}_i - E^i_j\big\|_1\,\right]_+^2 \quad (9)$$

$$L_{ins} = L_{base} + L_{cluster} \quad (10)$$

where $N_c$ denotes the number of instances in the clustering, $|W_i|$ denotes the number of wrongly clustered points in instance i, $E^i_j$ denotes the embedding of the j-th wrongly clustered point in instance i, and $\bar{E}_i$ denotes the mean embedding of the correctly clustered points in instance i.
Equipped with our SPCO module, the network can simulate the clustering procedure of inference more realistically and pay more attention to the points that are easily clustered erroneously, which is significant for improving the performance of instance segmentation.
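The following sketch illustrates the core of SPCO: running mean shift on the embeddings during training and collecting, per cluster, the points whose ground-truth instance disagrees with the cluster's majority; the loss of equation (9) then repels exactly those embeddings. The bandwidth value matches the inference setting reported later; the rest is illustrative.

```python
import numpy as np
from sklearn.cluster import MeanShift

def spco_wrong_points(e_ins, gt_ids, bandwidth=0.6):
    """Return {cluster: indices of mis-clustered points}, where each cluster's
    instance ID is taken as the mode of the ground-truth IDs inside it."""
    pred = MeanShift(bandwidth=bandwidth).fit_predict(e_ins)
    wrong = {}
    for c in np.unique(pred):
        members = np.where(pred == c)[0]
        ids, counts = np.unique(gt_ids[members], return_counts=True)
        majority = ids[counts.argmax()]                   # GT mode decides the ID
        wrong[c] = members[gt_ids[members] != majority]   # embeddings to repel
    return wrong
```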
Water Filling Sampling algorithm
In indoor scenes, there exist some inherent structures. For example, the space is always surrounded by walls and floors. When we sample point clouds from indoor scenes, points of certain categories occupy the main proportion, which causes a serious imbalance between these dominant categories and the other, normal categories, especially for tiny objects. In previous works on point segmentation tasks, this problem is rarely discussed. Therefore, in this paper a Water Filling Sampling algorithm is proposed to solve the imbalance problem in indoor scenes, adapting to different category distributions. Specifically, for the point cloud of a scene, we first cut it into blocks along the X-Y plane and store the corresponding semantic and instance labels for each point in the blocks. In addition, we define an accumulative vector $VB \in \mathbb{R}^{1 \times C}$ to store the block count for each category, and generate a list SemB[i] indicating which blocks contain points of category i. If the number of points in a block belonging to category i is larger than a threshold t, the block index is added to SemB[i] and VB[i] is incremented by 1. When the cutting step is finished, we obtain the probability of the block count for each category from VB. To keep the balance among the categories, we need to sample the same number of blocks from all the blocks with different probabilities: if the original probability of a category is high in the raw data, its sampling probability should be correspondingly low. To achieve this, we gradually add a small probability value δ to the category with the minimum sum of original probability and current sampling probability, until the total sampling probability adds up to 1. The process is like filling water into a canyon whose floor consists of the original probabilities of all the categories; the details of the algorithm are formulated in Algorithm 1.
As for part segmentation datasets, such as ShapeNet, the algorithm becomes more concise because we can obtain SemB and VB for each object directly and skip the cutting step. Note that, because of the characteristics of part segmentation datasets, we perform the WFS algorithm on the super categories.
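The probability "water filling" step can be sketched as follows; freq holds the original per-category block frequencies derived from VB, and δ follows the initialization in Algorithm 1. This is a simplified sketch of the probability computation only, not the full block-sampling pipeline.

```python
def water_filling(freq, delta=0.0001):
    """Return per-category sampling probabilities that flatten the original
    category distribution (freq is assumed to sum to 1)."""
    C = len(freq)
    sp = [0.0] * C
    total = 0.0
    while total < 1.0:
        # pour delta into the category whose combined level is currently lowest
        j = min(range(C), key=lambda c: freq[c] + sp[c])
        sp[j] += delta
        total += delta
    return sp

# Hypothetical frequencies: walls/floors dominate an indoor scene
print(water_filling([0.35, 0.30, 0.15, 0.10, 0.06, 0.04]))
```

Categories that are rare in the raw data end up with higher sampling probabilities, and dominant categories with lower ones.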
Experiments
In this part, we compare our method with other state-of-the-art methods on 3D point cloud semantic and instance segmentation tasks to demonstrate that our method is effective and robust on different kinds of datasets, including a large-scale indoor 3D dataset and a part segmentation 3D dataset.

Datasets. Following [15], we conduct the experiments on two benchmark datasets: the Stanford 3D Indoor Semantics Dataset (S3DIS) [48] and the ShapeNet part segmentation dataset [49]. The specific introduction of these datasets is as follows:
• S3DIS is a real 3D point cloud dataset generated by Matterport scanners for indoor spaces, which contains 6 areas and 272 rooms. Each point has a 9-dimensional input feature including XYZ, RGB and normalized coordinates. For each point, an instance ID and a semantic category ID with 13 classes are annotated. Following [3], we split the rooms into 1 m × 1 m overlapped blocks with stride 0.5 m along the X-Y plane and sample 4096 points from each block.
• The ShapeNet dataset is a synthetic mesh dataset for part segmentation, which consists of 16881 shape models from 16 categories. Each object is annotated with 2 to 5 parts from 50 different subcategories. We utilize the instance annotations generated by [14] as the ground-truth labels and sample 2048 points for each shape during training, following [3]. We split the dataset into training and validation following [15], and a 3-dimensional vector containing the XYZ coordinates is fed into our network as input.
Details. For instance segmentation, we trained SASO with λ = 0.001. We use five output embedding dimensions following [15] and set α to 0.01. We select the Adam optimizer to optimize the network on a single GPU (Tesla P100) and set the momentum to 0.9 for the training process. During the inference process, we set the bandwidth to 0.6 for mean-shift clustering and apply the BlockMerging algorithm.

Evaluation. Following [15], we evaluate the experimental results with the following metrics. For semantic segmentation, we calculate the overall accuracy (oAcc), mean accuracy (mAcc) and mean IoU (mIoU) across all the semantic classes, along with the detailed per-class IoU scores. To evaluate the performance of instance segmentation, we use the coverage (Cov) and weighted coverage (WCov) [50][51][52]. Cov is the average instance-wise IoU of the predictions matched with the ground truth, and WCov is the Cov score weighted by the size of the ground-truth instances. For the predicted regions P and the ground-truth regions G, Cov and WCov are defined as

$$\mathrm{Cov}(G, P) = \frac{1}{|G|}\sum_{i=1}^{|G|}\max_{j}\,\mathrm{IoU}\big(r^G_i, r^P_j\big)$$

$$\mathrm{WCov}(G, P) = \sum_{i=1}^{|G|} w_i \max_{j}\,\mathrm{IoU}\big(r^G_i, r^P_j\big), \quad w_i = \frac{|r^G_i|}{\sum_k |r^G_k|}$$

where $|r^G_i|$ is the number of points in ground-truth region i. We also measure the classical metrics of mean precision (mPrec) and mean recall (mRec) with an IoU threshold of 0.5.
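For concreteness, Cov and WCov can be computed per scene as in the sketch below, given per-point ground-truth and predicted instance IDs; this is a straightforward reading of the definitions above rather than the authors' evaluation code.

```python
import numpy as np

def coverage(gt_ids, pred_ids, weighted=False):
    """Cov / WCov: (size-weighted) average best IoU of each GT instance."""
    gt_instances = np.unique(gt_ids)
    total_pts = len(gt_ids)
    score = 0.0
    for g in gt_instances:
        g_mask = gt_ids == g
        best_iou = max(
            np.logical_and(g_mask, pred_ids == p).sum()
            / np.logical_or(g_mask, pred_ids == p).sum()
            for p in np.unique(pred_ids)
        )
        w = g_mask.sum() / total_pts if weighted else 1.0 / len(gt_instances)
        score += w * best_iou
    return score
```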
S3DIS Evaluation
We conduct the experiments on the S3DIS dataset with the backbone networks PointNet++.We train the network for 50 epochs with a batch size of 12, the initial learning rate is set to 0.001 and divided by 2 every 300 k iterations.
Quantitative Results. For the classical Area5 validation scenes, the quantitative results of SASO on the instance and semantic segmentation tasks are shown in Table 1. As we can see, SASO achieves 51.9 mWCov and 59.5 mPrec, which dramatically outperforms the state-of-the-art method 3D-BoNet [11] by 7.3 in mWCov and 1.9 in mPrec. As for semantic segmentation, our method significantly improves mAcc and mIoU by 2.6 and 2.1 respectively, compared with the advanced ASIS [15]. For a more comprehensive comparison, we evaluate our method with 6-fold cross validation on the S3DIS dataset. As shown in the table, our method achieves 58.3 mWCov and 72.8 mAcc, which significantly outperforms the state-of-the-art methods by a large margin. The stable improvement in both semantic and instance segmentation demonstrates the effectiveness of our method. For a more detailed comparison with our baseline framework and ASIS [15], Table 3 shows the results for specific categories in both instance and semantic segmentation on the Area5 scene of S3DIS. Note that, for a fair comparison, we reproduce the results of ASIS [15] with the PointNet++ backbone using the authors' code to obtain the per-class results.
Qualitative Results. To present our results intuitively, we visualize the predicted results and annotations on point clouds, as shown in Figure 4. For instance segmentation, different colors represent different instances. For semantic segmentation, each color refers to a particular category. It is evident that our method performs well, especially at the boundaries of different objects.
Ablation Study. The ablation study results are shown in Table 2. Equipping the baseline framework with the different modules of our method, we find that with our SPCO module we obtain 3.2 gains in mWCov and 4.1 gains in mPrec. It is interesting that the semantic segmentation results are also improved with this module; we think this is because the semantic and instance segmentation tasks share the shallow features, so the improvement in the instance segmentation branch can be beneficial to the semantic segmentation branch. When we add the MSA module to the baseline, we find that the semantic segmentation results are improved by 1.7 in mAcc and 1.2 in mIoU. With the WFS algorithm added to the baseline framework, we obtain 3.4 gains in mPrec and 1.3 gains in mIoU, which means the balance among different categories is critical to both tasks. Finally, compared with the baseline framework, our full method achieves a dramatic improvement in both tasks, including 7.6 mPrec gains in the instance segmentation task and 3.5 mIoU gains in the semantic segmentation task.
Consumption of memory and time. Table 4 shows a comparison of the memory cost and computation time. For a fair comparison, we conducted the experiments in the same environment, including the same GPU (GTX 1080), batch size (4) and data (Area5, comprising 68 rooms). Note that all time units are minutes and all memory units are MB. For the training process, the result is the time and memory required for one epoch. As we can see, our method needs relatively more time for training because we introduce clustering into the training process, while it costs little memory because of its brief but efficient architecture. For the inference process, the results show the resource consumption for Area5. Our approach takes only 373 MB and needs 40.4 minutes while achieving better performance, which is significantly faster and more efficient than the state-of-the-art methods.
ShapeNet Evaluation
We also validate our method on the part segmentation dataset ShapeNet; the semantic annotations are publicly available while the instance segmentation annotations are generated following [14]. Because of the lack of ground truth for the instance annotations, we only provide qualitative results for instance segmentation in Figure 5, as in [15]. The four rows of Figure 5, from top to bottom, show the semantic segmentation results, semantic annotations, instance segmentation results and instance annotations respectively. As we can see, different parts of the same object are well grouped into individual instances, especially at the boundaries of different parts. The semantic segmentation results are exhibited in Table 5. Our approach clearly boosts the result over the baseline framework by 2.9 mIoU and outperforms the state-of-the-art methods ASIS [15], PointConv [45] and SSCN [6]. These results reveal that our proposed method also has the capability to improve part segmentation performance.
To prove the effectiveness of our WFS algorithm intuitively, we show the sampling probability and the corresponding improvement for the different categories, as depicted in Figure 6. In the upper graph (a), orange denotes the original frequency of the different categories in the training dataset, while blue represents the sampling probabilities for the different categories. We can see that the distribution over categories is more balanced with our WFS algorithm. The second graph (b) shows the improvement for the different categories, where orange denotes a positive boost and purple a negative influence. For the categories with low frequency in the raw data, the corresponding improvements are obvious, while for the categories with high frequency the results are barely influenced. This demonstrates that our WFS algorithm is effective and critical for alleviating the imbalance problem.
Conclusion
In this paper, we propose a novel framework which jointly performs semantic and instance segmentation. For the instance segmentation task, a module named SPCO is proposed to introduce clustering into the training process and saliently focus on the points that are harder to distinguish in the clustering process. For the semantic segmentation branch, we introduce the MSA module, based on statistical knowledge, to exploit the potential association of the spatial semantic distribution. In addition, we propose a Water Filling Sampling algorithm to address the imbalance problem of the category distribution. Qualitative and quantitative experimental results on challenging benchmark datasets demonstrate the effectiveness and robustness of our method.
Fig. 1: An illustration of our joint learning framework. The input 3D point clouds are first encoded to F_share by PointNet++ [4]; the common feature is then decoded separately by the semantic and instance segmentation branches. In the semantic segmentation branch (blue), an MSA module based on statistical knowledge is proposed to explore the semantic association, as expounded in Sec 3.2. For instance segmentation (green), we propose the SPCO module to introduce clustering into the training process and focus on hard-to-distinguish points, as explained in Sec 3.3.
Fig. 3: An illustration of our SPCO module. As shown in the first row, the different points of one object have different difficulties in being distinguished, especially the points at the joints between different objects. To address this problem, we introduce clustering into the training procedure and saliently focus on the points that are harder to distinguish in the clustering process.
Algorithm 1: Details of the Water Filling Sampling algorithm (WFS)
Input: Training point clouds of all the scenes S with corresponding semantic-instance labels, and a series of parameters, including the threshold t, the number of points for each block N_p, and the number of categories N_c.
Output: All the balanced blocks Blk with corresponding semantic labels SLab and instance labels ILab.
Initialization: SemB = [[]] * N_c, VB = [0] * N_c, SP = [0] * N_c, SPB = [], B = [], Ω = 0, δ = 0.0001
1: for S_i in all the scenes S do
2:   Cut S_i into blocks B_i along the X-Y plane.
3:   In each block, randomly sample N_p points with labels.
4:   for B_ij in all the blocks B_i do
5:     SLab_ij ← separate out the corresponding labels.
6:     for c in range [0, N_c − 1] do
7:       if p_c > t then append the block index to SemB[c] and increment VB[c]
8:     end for
9:   end for
10: end for
11: Compute the original category probabilities from VB; repeatedly add δ to SP for the category with the minimum sum of original and sampling probability, until the total sampling probability adds up to 1.
12: Sample the balanced blocks Blk, with SLab and ILab, from SemB according to SP.
Fig. 4: Qualitative results of our method on the S3DIS dataset. For semantic results, each color refers to a particular category, and for instance results, different colors represent different objects.
Fig. 5: Qualitative results for semantic and instance segmentation on the ShapeNet dataset.
Fig. 6: Sampling probabilities (a) and the corresponding per-category improvements (b) of the WFS algorithm.
Table 1
Semantic (green) and instance (red) segmentation results on S3DIS.
Table 2
Ablation study on the S3DIS dataset in Area5.
Table 3
Per class results on the S3DIS dataset.
Table 4
Comparisons of computation time, GPU memory and performance.
"Computer Science"
] |
Design and Implementation of a Vehicle Social Enabler Based on Social Internet of Things
In recent years, the combination of novel context-aware systems with the Internet of Things (IoT) has received great attention with the advances in network and context-awareness technologies. Various context-aware consumer electronics based on the IoT for intelligent and personalized user-centric services have been introduced. However, although the paradigm of the IoT has evolved from smart objects into social objects, existing context-aware systems have not reflected these paradigm changes well. Therefore, this paper proposes a social enabler (S-Enabler) to overcome this limitation. The S-Enabler plays an important role in converting existing objects into social objects. This paper presents the middleware architecture and cooperation processes for a social IoT-based smart system. In this paper, the S-Enabler is designed to be applied to a vehicle, and an energy saving service using the S-Enabler is introduced. The proposed energy saving service can reduce energy and fuel consumption based on social behaviors such as sharing or competition. The performance of the S-Enabler is discussed through a simple vehicle service scenario. The experimental results show that the S-Enabler reduced fuel consumption by up to 31.7%.
Introduction
With the development of context-awareness technology, various spaces, such as the home or the office, are becoming smart spaces [1][2][3]. In smart spaces, for example, various sensors gather the environmental and situational context, smart systems recognize the user's situations or preferences, and various service providers then serve personalized services to users. Recently, such context-aware systems have been combined with the Internet of Things (IoT).
The IoT is a new ICT (information and communications technology) paradigm, and this paradigm shift is attributed to the advanced connectivity of various things, such as systems, devices, and services. That is, things are able to communicate with each other and provide smart services autonomously. Various IoT-based applications and services have begun to emerge in areas such as home automation, healthcare, vehicle management, and energy management [4][5][6][7][8][9]. Newly released consumer electronics based on the IoT at recent consumer product exhibitions also reflect this technical trend.
The IoT is evolving continuously, as shown in Figure 1. The early IoT focused on improved interoperability and connectivity of objects (things); these objects are connected objects. In this stage, the standards for communication between objects, network convergence, and the address management of various objects were important issues. The current IoT is focused on interactivity with the surrounding context. In other words, various objects provide intelligent services to users through interaction with the surrounding context; these objects are smart objects. Recently released smart consumer electronics are types of smart objects with the ability to cooperate with other smart devices and recognize surrounding environments in order to provide smart services. The next step of the IoT is object socialization. In this stage, objects can configure their social networks themselves and provide more advanced services through social behaviors such as competition, collaboration, and sharing; these objects are social objects. This paper focuses on social objects and proposes the IoT-based LED car enabler, named the social enabler (S-Enabler), to use connected objects or smart objects as social objects in vehicle environments. Environmental issues, such as climate change, are threatening humanity. Fossil fuels are being depleted because of a sharp increase in energy consumption; some environmental experts expect that fossil fuels will be exhausted completely in the not-too-distant future. For this reason, Green ICT has become increasingly essential in recent years. Therefore, this paper presents an energy saving scheme using the S-Enabler. In other words, the proposed energy saving scheme reduces the fuel consumption of a car through social behaviors such as competition, collaboration, and sharing. The S-Enabler has the following features:

(i) Smart Service Provision Based on Social Behavior in the Social IoT. In recent years, various consumer devices have been equipped with microprocessors and network transceivers. These devices are part of the IoT and can interact with other devices and provide services to users autonomously, which is called the smart IoT. Furthermore, as described above, the paradigm of the IoT has changed from the smart IoT to the social IoT. Therefore, the S-Enabler is designed to cooperate with networked devices and utilizes a short-range wireless communication technology, particularly Bluetooth, for establishing the social IoT. This paper also introduces an energy saving service using the S-Enabler based on the social IoT.
(ii) Management of Multiobjects and Multiroles. There have been many changes in the roles of objects in the IoT. In the past, an object mainly had one role; this paper calls this one-object one-role (O2OR). For example, a mobile phone was used for communication between people, and a sensor was used for gathering environmental information. In recent years, the object has evolved into a multi-role object. That is, one object has various roles; this paper calls this one-object multi-role (O2MR). For example, a modern smartphone has various functions, such as communications, social network services (SNS), camera, messenger, and sensing. In the future, objects will have even more roles. Furthermore, objects will be able to create various functions by cooperating with other objects; this paper calls this multi-object multi-role (MOMR). Therefore, this paper considers this new paradigm of the IoT.
(iii) Dynamic Configuration of a User-Centric Service Domain. An important feature of the smart space is to configure a service domain around the users. Currently, most spaces have a particular purpose: a user enters a space in order to receive a particular service. For example, a user exercises in a gym, drives in a car, and works in an office. In the future, however, the convergence of services and spaces will further accelerate. For example, it will be possible to receive healthcare services in a car, or to process work at home. Therefore, the S-Enabler is utilized for configuring the user-centric service domain through the fusion of services and spaces.
The rest of this paper is organized as follows. Section 2 describes related works on the IoT, covering various technologies related to the IoT and smart services based on the IoT. Section 3 presents the background and paradigm of the IoT. Section 4 presents the system architecture and core technologies. Section 5 presents the implementation of the proposed system, with the hardware architecture and smartphone application. Section 6 discusses service scenarios and presents some experiments on energy saving. Finally, the conclusion is given in Section 7.
Related Works
2.1. Enabling Technologies for IoT. Recently, enabling technologies for the IoT have been widely studied around the world. Cirani et al. [10] proposed a scalable and self-configuring architecture for the IoT, which aimed to provide automated service and resource discovery mechanisms with no human intervention for configuration. Perumal et al. [11] proposed an interoperability framework for the implementation of a smart home based on heterogeneous home networks; this framework utilized simple object access protocol (SOAP) technology to provide platform-independent interoperation among heterogeneous systems. Wu and Fu [12] proposed a framework to improve interactivity between humans and systems. Özçelebi et al. [1] suggested a lightweight system architecture for the discovery, monitoring, and management of the objects that form the smart space, such as nodes, services, and resources. In particular, this system architecture was designed for constrained environments in which the objects have low resource capacity.
2.2. Applications and Services Based on IoT. The development of applications and services based on the IoT has been widely studied. Islam et al. [4] reviewed IoT-based healthcare technologies, including network architectures, platforms, and applications, and also presented industrial trends in IoT-based healthcare solutions. Kelly et al. [5] discussed an effective implementation of the IoT for environmental condition monitoring in homes based on a low-cost ubiquitous sensing system. Li et al. [6] presented an IoT application in the form of a smart community with cooperating objects (Neighborhood Watch and Pervasive Healthcare). Li and Yu [7] presented the design of a smart home system based on the IoT and service component technologies. Chong et al. [8] analyzed the characteristics of a smart home system and designed and implemented a smart home system based on the IoT for flexible and convenient control of a home system.
2.3. Social IoT. Recently, there have been many research activities integrating social networking concepts into the IoT. Atzori et al. [13] presented the concept, architecture, and network characterization of the social IoT; their paper proposed policies for the establishment and management of social relationships between objects, and a possible architecture for the social IoT was also proposed. Mendes [14] presented a social-driven Internet of connected objects and discussed the technology for ensuring efficient interaction among the physical, virtual, and social worlds. Nitti et al. [15] proposed a subjective model for trustworthiness evaluation based on the social Internet of Things, used for computing the trustworthiness of its neighbors. Guo et al. [16] proposed the opportunistic IoT, which improves the harmonious interaction among humans, society, and smart objects; their paper presented innovative application areas and discussed the challenges posed by this new computing paradigm.
As described above, the IoT has been extensively studied. However, this study differs in two aspects from the previous works. First, it applies the new ICT paradigm to design and implement the proposed system. Second, it presents various smart services in a car through cooperation in the vehicle social network.
Paradigm of the Internet of Things
The IoT is a new networking paradigm in which a variety of things (e.g., networked devices, sensors, and actuators) become an integral part of the Internet. In other words, the various things become "smart things" that are equipped with microprocessors and network transceivers, which enable them to communicate with each other and provide smart services autonomously. Various products and service applications of the IoT have started to be released in areas such as home automation, energy management, healthcare, and vehicle management. The important features of an IoT-based smart system can be summarized as follows:

(i) First, it improves the capability of collaboration: IoT-based smart systems can create and provide information and services through collaboration with other smart systems. This is possible owing to intense interactions among smart systems; IoT-based systems consist of different and heterogeneous objects that can communicate with each other transparently and seamlessly.

(ii) Second, improved situation awareness is also an important characteristic: IoT-based smart systems often aim at enhanced recognition of the surrounding environment compared to existing context-aware systems. With the change of network paradigms to the IoT, the capability of situation awareness can be greatly improved.

(iii) Lastly, it can provide enhanced service quality, such as guaranteeing the quality of experience (QoE): information collected from various objects in the IoT forms big data, and it is possible to provide user-centric services by utilizing this big data.

It is impossible for the components of the IoT to communicate with each other and create and provide intelligent services to users autonomously unless they are smart objects. Such objects can perform various functions to create and provide intelligent services through interaction with various objects. The IoT has the important ability to support novel applications and services based on cooperation between objects in more effective and efficient ways.
The proposed system in this paper, the S-Enabler, is designed and implemented by applying IoT-enabling and IoT application technologies. This paper also introduces a social IoT application, an energy saving vehicle (ESV), which reduces fuel consumption through cooperation between smart devices. The S-Enabler was designed to implement social service domains, as shown in Figure 2. Figure 3 shows the middleware architecture for the proposed system. The middleware architecture consists of six types of management modules according to the functions of the S-Enabler: the context management module for recognizing the surrounding environment, the cooperation management module for collaboration between the smart devices, the social management module for the establishment of a safe and reliable vehicle social network (e.g., discovering objects and assessing object reputations), the session management module for configuring a seamless service domain, the service management module for service creation and decisions, and the knowledge repository for storing the various pieces of information required to operate as a social object. A detailed description of the middleware architecture follows.
Cooperation Management Module (CooM).
The CooM enables autonomous collaboration between various smart devices. The CooM consists of the connection manager, data requester, and security manager. The connection manager manages the connections between a variety of smart devices, and the data requester periodically requests information from smart devices upon the occurrence of a specific event. Lastly, the security manager enables a safe and reliable connection between the smart devices and the S-Enabler. To achieve this, the CooM performs security capability and requirement management, key management, and the authentication and authorization of smart devices.
Context Management Module (ConM).
The ConM manages the collected information and recognizes the surrounding environment. In addition, this module plays the important role of generating patterns by analyzing the collected information. The ConM is composed of the context analyzer, the context manager, and the pattern generator. The various data entering the ConM are utilized by the context analyzer as information for recognizing the surrounding environment; in this paper, recognition of the current status of users in particular will be introduced. For each piece of collected information, the context manager either updates the corresponding information stored in the knowledge repository (KR), stores it in the KR, or discards it. Lastly, the pattern generator generates patterns by analyzing the collected situational information. Such patterns can be utilized for providing prediction-based services.
Social Management Module (SocM).
The SocM establishes the social networks. Its role includes object discovery and object reliability verification in order to implement a safe and reliable vehicle social network. The SocM consists of the object discovery manager, object verifier, and object manager. The object discovery manager searches for the objects that will have a meaningful relationship with the S-Enabler. The object verifier inspects whether a found social object is trustworthy and performs a reputation assessment to improve the quality of the social service. The object manager performs the registration, authentication, and authorization of the objects.
Session Management Module (SesM).
The SesM is required to provide a user-centered service. The SesM has two major functions: the first is to provide a user-centric service and the second is to provide a seamless service. The SesM uses the session ID and user ID to provide these two services. For example, when the service is interrupted, the SesM stores the session ID and user ID with all the related contexts, such as location, service information, and content information. By using the session ID and user ID, the S-Enabler can restart the service at the interrupted point. The policy controller manages various policies and rules, which are decided according to the situation. For example, when more than two users enter the same service domain, the priority of service provision is determined according to predefined policies. In addition, when the user moves into a different service domain, the policy controller requests the cloud server to send the relevant policies and rules. The configuration controller organizes a new service environment according to changes in the situation.
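A minimal sketch of the SesM bookkeeping is given below; the field names and storage structure are illustrative assumptions, chosen only to show how a (session ID, user ID) pair plus the stored context lets a service resume at the interrupted point.

```python
from dataclasses import dataclass, field
from typing import Dict, Optional, Tuple

@dataclass
class Session:
    session_id: str
    user_id: str
    context: dict = field(default_factory=dict)  # location, service, content position

_sessions: Dict[Tuple[str, str], Session] = {}

def suspend(s: Session) -> None:
    """Persist the session when the service is interrupted."""
    _sessions[(s.session_id, s.user_id)] = s

def resume(session_id: str, user_id: str) -> Optional[Session]:
    """Look up the stored context so the service restarts where it stopped."""
    return _sessions.get((session_id, user_id))
```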
Service Management Module (SerM).
The SerM infers the appropriate service based on the context and, through this inference, configures and manages the service. The SerM is composed of an inference engine, a decision manager, and a service manager. The inference engine infers service candidates from a variety of service applications; the most appropriate service is then determined by the decision manager, which correlates the current situation with the service candidates. The service manager registers, deletes, and manages the services (referred to as convergence).
Knowledge Repository (KR).
The KR manages the system's database. It is a set of components that manages the context, the rules and policies, and the patterns, and it also controls the information ontology, the service-pattern look-up table, the context, and the rules and policies. The KR autonomously updates the database when a new situational event occurs. Furthermore, the KR autonomously modifies the policies and rules to improve service maintenance and pattern management efficiency.
Social Cooperation Diagram for Smart Service Provision.
The S-Enabler can configure vehicle social networks to provide the user with smart services based on social behavior. In this paper, social networks refer to networks that consist of social objects; more specifically, we refer to the social interaction among the S-Enablers. We propose a new vehicle social network based on a smartphone application for smart social services. The S-Enabler can create a variety of useful smart services by utilizing the user's social behaviors on top of the vehicle social network. The S-Enabler enables the application, recognizes the user and social context, learns patterns, and creates services; in other words, it implements smart service domains based on social behavior. For this purpose, the S-Enabler has three main social cooperation processes: the initial connection process, the social cooperation process, and the service creation process. The ultimate goal of the S-Enabler is to voluntarily cooperate with surrounding smart devices in order to socialize them. As explained above, the paradigm of the IoT is changing from smart objects to social objects, and the goal of the S-Enabler is accordingly to change smart devices (objects) into social devices (objects). The S-Enabler therefore supports wireless technology to cooperate with surrounding smart devices. This paper uses Bluetooth for wireless communications; an advantage of Bluetooth is its very low power consumption, and recent smart devices, such as smartphones and smart wearable devices, include a Bluetooth transceiver module by default.
Initial Connection Process with a Smart Device.
Figure 4 shows the initial connection process between the S-Enabler and a smart device. Initially, when the car starts, the S-Enabler periodically broadcasts an advertising packet. If the advertising packet is received, the smart device sends an initiation request packet. If the initiation request packet is received, the S-Enabler sends an initiation response packet after the authentication process; the S-Enabler is then initially connected with the smart device. Managing energy consumption is a very important issue because periodic signal transmission consumes a lot of energy and smart devices, such as smartphones and smart watches, commonly run on battery. In this study, two options are offered to reduce the energy consumed by the initiation process. First, the user can enable the Bluetooth interface manually. Second, a smart device running our smartphone application utilizes the schedule patterns of the user to decide whether to enable or disable the Bluetooth interface: from the user's schedule pattern, the smart device derives the probability that the user is riding in a vehicle and then determines the enabling/disabling cycle of the Bluetooth interface on the basis of that probability. A rough sketch of both mechanisms is given below.
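The paper specifies neither the packet formats nor the exact duty-cycle rule, so the following Python sketch is only one plausible reading of the handshake and of the probability-based Bluetooth scheduling; every name and the inverse-probability mapping are assumptions.

```python
class Device:
    """Toy stand-in for a smartphone or wearable; attributes are invented."""
    def __init__(self, in_range=True, authorized=True):
        self.in_range = in_range
        self.authorized = authorized

def initial_connection(device):
    """Advertise -> initiation request -> authenticate -> response."""
    if not device.in_range:      # advertising packet was not received
        return False
    if not device.authorized:    # security manager rejects the request
        return False
    return True                  # initiation response sent; connected

def bluetooth_scan_interval(p_riding, base_interval_s=60.0):
    """Scan more often when the user is likely to be in a vehicle.

    p_riding is the probability, derived from the schedule pattern, that
    the user is currently riding in a vehicle. The inverse mapping below
    is one plausible choice, not the paper's rule.
    """
    p = min(max(p_riding, 0.05), 1.0)  # clamp away from zero
    return base_interval_s / p         # p=1.0 -> 60 s, p=0.05 -> 1200 s

print(initial_connection(Device()))    # True
print(bluetooth_scan_interval(0.25))   # 240.0 seconds between scans
```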
Social Cooperation Process.
Humans take on various roles according to the situation. A man, for example, has various roles such as father, son, employee of a company, or member of an interest group. These relations sometimes have a hierarchical structure and sometimes an egalitarian one. This study focuses on these characteristics of human relations, and the S-Enabler is designed and implemented with these social features in mind. Figure 5 illustrates the sequence of the social cooperation process and the service creation process.
(i) Situation Recognition. First, the S-Enabler periodically broadcasts signals to recognize surrounding smart devices and creates logical connections. Then, the S-Enabler receives the user's schedule from the management server and environmental information, such as time, location, and temperature/humidity, from the smart devices. The ConM (context management module) of the S-Enabler aggregates and analyzes this context, generates patterns, and stores the context and patterns in the KR. (ii) Social Cooperation. The SocM of the S-Enabler searches for objects (or smart service domains) that have similarity to the local objects; these similarities include the user's characteristics and schedules, the types of smart devices, and environmental information. After being verified by the SocM, an object becomes a member of the vehicle social network. The S-Enabler uses the information collected via the configured vehicle social network for the decision and creation of the social IoT-based smart services. (iii) Role Decision. The SerM (service management module) of the S-Enabler assigns a role to each device in the social service domain by evaluating schedule and environmental information (time, location, temperature/humidity, etc.). If roles overlap, the SerM adjusts each role to enable efficient operation.
Service Creation Process Based on Social Behavior.
Because dynamic role decisions can cause a large amount of resource and energy consumption, the S-Enabler assigns the role to each device according to resource sharing in order to reduce energy consumption. The S-Enabler has the role of managing the resources of smart devices in the social service domains and periodically updates the shared resource profile. The S-Enabler infers the context and creates service rules based on social behaviors.
Energy Saving Algorithm Based on Social Behavior.
This subsection introduces an energy saving algorithm based on social behaviors such as competition and sharing. Figure 6 shows the energy saving application based on vehicle social networks.
Step 1. The user first enters the fuel cost that he desires to spend in a month (the threshold value) and the vehicle's average fuel consumption per 100 km. Then, the total distance that can be driven in a month is calculated from the fuel cost, the fuel consumption per 100 km, and the average price at the gas stations.
Step 2. The recommended driving distance per unit hour can be calculated as d_h = D_m/(N_m × 24), where d_h, D_m, and N_m refer to the recommended driving distance per hour, the total distance that can be driven in the month, and the number of days in the m-th month, respectively.
Step 3. The recommended driving distance is compared with the actual driving distance, and the S-Enabler shows the energy saving level through its LED lighting. This information can be checked in detail through the user's smartphone application. The actual driving distance used here is calculated through autonomous cooperation between the S-Enabler and the user's smartphone; Bluetooth is used for the establishment of the PANs. It is possible to save energy by sharing driving information, such as driving distance, with other vehicles that have a similarity to the user.
Step 4. The shortcoming of this method is that it does not consider geography, vehicle year, or the user's driving style. Therefore, actual fuel costs and driving distance can be used for correcting this error. If a credit card is mainly used for refueling, the fuel cost can easily be collected by the smartphone via the short message service (SMS) notification sent upon payment; it is then possible to obtain a more accurate average fuel consumption per 100 km by comparing the actual driving distance and fuel costs. This method cannot be used if a credit card is not used for paying for fuel, in which case the S-Enabler resolves the problem by means of the vehicle social network: it corrects the error by comparing the driver's vehicle with similar vehicles in the vehicle social network. A sketch of Steps 1-3 is given after this list.
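As a concrete illustration of Steps 1-3, the Python sketch below computes the monthly distance budget, the recommended hourly distance d_h, and a coarse LED level from the comparison with the actual distance. The three-level LED mapping and all numbers are assumptions; the paper describes the calculation only qualitatively.

```python
def monthly_distance_budget(fuel_budget_krw, l_per_100km, price_krw_per_l):
    """Step 1: total distance (km) drivable on the monthly fuel budget."""
    liters = fuel_budget_krw / price_krw_per_l
    return liters / l_per_100km * 100.0

def recommended_hourly_distance(total_km, days_in_month):
    """Step 2: d_h = D_m / (N_m * 24)."""
    return total_km / (days_in_month * 24.0)

def led_level(actual_km, recommended_km):
    """Step 3: compare actual vs. recommended distance so far.
    The three thresholds are invented for illustration."""
    ratio = actual_km / recommended_km if recommended_km else float("inf")
    if ratio <= 1.0:
        return "green"    # within budget
    elif ratio <= 1.2:
        return "yellow"   # slightly over budget
    return "red"          # well over budget

D_m = monthly_distance_budget(200_000, l_per_100km=8.0,
                              price_krw_per_l=1_600)       # 1562.5 km
d_h = recommended_hourly_distance(D_m, days_in_month=30)   # ~2.17 km/h
# After 10 days: recommended so far = d_h * 24 h * 10 days
print(led_level(actual_km=500, recommended_km=d_h * 24 * 10))  # "green"
```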
By utilizing social behavior, it is possible to resolve the problems in the IoT environments and to create a novel smart service that has high energy efficiency and high user satisfaction, that is, to guarantee the QoE.
Implementation
This study implemented the proposed system, the S-Enabler. In this section, the technical realization of the S-Enabler is described in terms of the hardware architecture and the accompanying smartphone application. Figure 7 illustrates the main features of the S-Enabler prototype.
5.1. Hardware Architecture. Figure 8 shows the hardware architecture and prototype of the proposed system. The hardware architecture functionally consists of four parts: the main processor, the network and communication part, the LED and LED driving unit part, and the power management part. This paper utilizes an 8-bit microcontroller as the main CPU. For the network and communication part, Bluetooth 4.0 with Bluetooth Low Energy (BLE) is used: BLE offers reduced energy consumption and cost compared to the previous version of Bluetooth, and it is currently widely used for WPANs by various smart devices, such as smartphones and wearable devices, because of these advantages. Systems that want to cooperate with various smart devices should therefore use BLE, and for these reasons we used Bluetooth 4.0 with BLE for building the WPAN with a Bluetooth transceiver module. The LED and LED driving unit part consists of a full-color LED and the LED driving circuit. The power management part consists of a power regulator and a switched-mode power supply (SMPS).
5.2. Smartphone Application.
This study developed a smartphone application to increase convenience for the users. Figure 9 shows the social IoT-based smartphone application for fuel saving, which makes it possible to monitor the driving distance, like the LED of the S-Enabler. If the user touches the "my car" menu, it shows in detail the actual distance the user drove, the threshold distance the user entered, the number of days remaining, and the recommended daily driving distance, which enables the user to check his driving history in detail. If the user touches the "friends" menu, friends are ranked in order of their driving status; the amount of cost saving is calculated using the average fuel consumption, the current gas price, and the distance traveled on foot. This helps to save fuel further through competition. Finally, if the user touches the "car sharing" menu, it lists the friends who can carpool with the user. This menu analyzes driving information, such as each user's collected driving destinations and average driving times, to show the list of friends who can carpool, so the user can send a message to a friend to share a car. In this way, fuel costs can be dramatically reduced. This application enables the user to save energy through competition and sharing, the social relationships between people, mediated by the S-Enabler.
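The paper does not state the exact cost-saving formula used in the "friends" menu; one natural reading, sketched below, prices the distance covered on foot at the vehicle's per-km fuel cost. The function name and numbers are illustrative only.

```python
def cost_saving_krw(km_on_foot, l_per_100km, price_krw_per_l):
    """Fuel money saved by covering km_on_foot without the car.
    One plausible reading of the 'friends' menu calculation."""
    return km_on_foot * (l_per_100km / 100.0) * price_krw_per_l

print(cost_saving_krw(12.0, 8.0, 1_600))  # 1536.0 KRW saved
```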
Experiment and Discussion
It is not easy to measure the average energy savings achieved by the energy management method suggested in this study. This is because an energy management method based on social behavior, such as competition or sharing, reflects user intentions related to energy reduction, which can result in a wide range of energy reductions across users. In addition, the savings can be perceived as smaller as time passes owing to psychological factors. Thus, a large control group would be needed to accurately measure the average energy savings, and meaningful results would likely require a long-term experiment (over one year). Therefore, this study draws its conclusions by discussing the significance of the proposed research on the basis of a simple experiment rather than a large-scale one.
In the experiment, energy savings were measured over four weeks for five researchers who commute by car. In the first and second weeks, fuel consumption was measured while commuting by car as usual, and in the third and fourth weeks, fuel consumption was measured after installing the suggested system. Since two of the researchers knew that they lived in the same area, they could share a car; these two researchers commuted by car sharing in the third and fourth weeks. To emphasize the social behavior of competition, in the third and fourth weeks gift cards worth 50,000 KRW (about 42 USD) were given to the user with the highest fuel savings and gift cards worth 30,000 KRW (about 25 USD) to the user with the second highest fuel savings. The threshold fuel amount was decided by the users themselves. The results of the experiment over the four weeks are presented in Figure 10: there was an average saving of 31.7% in the third and fourth weeks. Users 2 and 4 engaged in car sharing and agreed to split the fuel cost 50-50; with more car sharing, a bigger savings effect would be seen. Although the experiment has limitations, the results are significant and meaningful.
Conclusion
This paper proposed the S-Enabler, based on vehicle social networks and the new paradigm of the IoT. The paradigm of the IoT has changed from the smart IoT to the social IoT; therefore, the S-Enabler is designed to cooperate with smart devices based on social behavior and utilizes wireless technology, particularly Bluetooth, for establishing the social IoT. This paper presented the middleware architecture and cooperation processes for a social IoT-based smart system and also introduced an energy saving algorithm based on social behaviors such as competition and sharing. The proposed system was designed and implemented, and an experiment was performed to evaluate the energy savings: the S-Enabler reduced total fuel consumption by 31.7%. It is expected that this work will contribute guidance for the design and development of social IoT-based smart systems.
Figure 1: Paradigm of the Internet of Things.
4.1. Overview of System Architecture. The suggested system, S-Enabler, is designed reflecting the novel IoT paradigm.
Figure 3: Middleware architecture of the S-Enabler.
Figure 4: Sequence diagram of the initial connection process.
Figure 8: System implementation; (a) hardware block diagram; and (b) prototype of the S-Enabler.
Figure 9: Social IoT-based smartphone application for fuel saving through interaction with the S-Enabler.
Figure 10: Results of the energy saving experiment. | 6,731.2 | 2016-05-19T00:00:00.000 | [ "Computer Science" ] |
One- and Two-Color Resonant Photoionization Spectroscopy of Chromium-Doped Helium Nanodroplets
We investigate the photoinduced relaxation dynamics of Cr atoms embedded into superfluid helium nanodroplets. One- and two-color resonant two-photon ionization (1CR2PI and 2CR2PI, respectively) are applied to study the two strong ground state transitions z7P2,3,4° ← a7S3 and y7P2,3,4° ← a7S3. Upon photoexcitation, Cr* atoms are ejected from the droplet in various excited states, as well as paired with helium atoms as Cr*–Hen exciplexes. For the y7P2,3,4° intermediate state, comparison of the two methods reveals that energetically lower states than previously identified are also populated. With 1CR2PI we find that the population of ejected z5P3° states is reduced for increasing droplet size, indicating that population is transferred preferentially to lower states during longer interaction with the droplet. In the 2CR2PI spectra we find evidence for generation of bare Cr atoms in their septet ground state (a7S3) and metastable quintet state (a5S2), which we attribute to a photoinduced fast excitation–relaxation cycle mediated by the droplet. A fraction of Cr atoms in these ground and metastable states is attached to helium atoms, as indicated by blue wings next to bare atom spectral lines. These relaxation channels provide new insight into the interaction of excited transition metal atoms with helium nanodroplets.
■ INTRODUCTION
The advent of helium nanodroplets (HeN) has spawned many new vistas in the field of matrix isolation spectroscopy. 1,2 Various fascinating spectroscopic experiments have been enabled by helium nanodroplet isolation spectroscopy, among them the study of the phenomenon of superfluidity from a microscopic perspective 3,4 or the investigation of high-spin molecules. 5−7 Helium droplets as well as bulk superfluid helium 8 offer a unique spectroscopic matrix because of the weak interaction with dopants. HeN with an internal temperature of 0.37 K can easily be combined with many spectroscopic techniques. Utilizing the method of resonant multiphoton ionization (REMPI) spectroscopy for the investigation of doped helium nanodroplets is well established and has recently enabled the study of unusual alkali metal−HeN Rydberg complexes 9−13 as well as tailored molecules and clusters. 14−16 The investigation of complex magnetic phenomena in small nanoclusters is of interest for both the fundamental theory of magnetism and the development of novel electronic devices. In this context, chromium (Cr) atoms with their huge magnetic moment 17 are of special interest. Cr nanoclusters exhibit rich magnetic behavior and unusual properties, which are highly dependent on their geometric structure and spin configuration. A fundamental example of such an unusual and spin-dependent effect is the Kondo response of the triangular Cr trimer. 18 Helium nanodroplets are known to favor the formation of high-spin species, 6 which may offer a convenient way for the selective preparation of high-spin Cr nanoclusters and their subsequent surface deposition 19,20 under soft landing conditions. 21 Recently, we started the investigation of Cr atoms and clusters embedded in HeN. 22−24 Mass spectroscopic studies demonstrated the formation of clusters consisting of up to 9 Cr atoms. 22 To gain deeper insight into the interaction between Cr atoms and the helium droplet, we focused on the spectroscopic study of single isolated Cr atoms in HeN. These experiments shine light on the influence of the droplet on the electronic structure of the Cr atom as well as on photoinduced dynamics, utilizing various spectroscopic methods such as laser-induced fluorescence (LIF) spectroscopy, beam depletion (BD) spectroscopy, and one-color resonant two-photon ionization (1CR2PI). Our previous studies cover the y7P2,3,4° and z5P1,2,3° states. The y7P2,3,4° ← a7S3 transition appears broadened (600 cm−1) and blue-shifted. In addition, transitions to discrete autoionizing (AI) states (g5D2,3,4 and e3D1,2,3), which interact with the ionization continuum, 23 were observed. Dispersed LIF spectra recorded upon excitation to y7P2,3,4° show narrow band bare atom emission from the y7P2°, z5P1,2,3°, and z7P2,3,4° states. 24 Both observations demonstrate that a fraction of the Cr atoms is ejected from the droplets upon photoexcitation to the y7P2,3,4° states. These experiments show that the dynamic processes induced by photoexcitation are governed by nonradiative, droplet-mediated relaxation mechanisms that result in the formation of bare Cr* atoms in various excited and metastable states.
In this article we extend our studies to previously uninvestigated spectral regimes. The utilization of two-color resonant two-photon ionization (2CR2PI) spectroscopy via the two strong ground state transitions (z7P2,3,4° ← a7S3 and y7P2,3,4° ← a7S3) offers new insights into the photoinduced dynamics of Cr−HeN and the interactions between Cr and HeN. 2CR2PI can be applied to energetically lower states due to the addition of a second laser with higher photon energy. In addition to the observation of bare atoms, we discuss the photoinduced generation of ground state Cr−Hen complexes and excited Cr*−Hen exciplexes. The formation of complexes consisting of excited atoms or molecules with several attached helium atoms is a general process initiated by the excitation of foreign species embedded in helium nanodroplets. These neutral "exciplexes" have been observed, for example, for surface bound species such as alkali metal atoms and molecules 25−31 and alkaline-earth metal atoms 32 as well as for species located inside the droplet. 33,34 Here we show that not only bare Cr atoms relax into the quintet or septet ground state but also the observed Cr−Hen complexes. Different ionization pathways that compete with relaxation mechanisms are reflected in the difference between the 1CR2PI and 2CR2PI spectra. The study of the droplet size dependence of a selected transition reveals additional characteristics of the relaxation and ejection mechanisms.
■ EXPERIMENTAL SECTION
The experimental setup has been described in detail in previous publications. 23,24,35,36 In brief, the HeN beam is formed in a supersonic expansion of helium gas (purity 99.9999%) from a cooled nozzle (5 μm diameter, p0 = 50 bar stagnation pressure, and T0 = 10−24 K temperature). The droplet size is controlled by T0 and follows a log-normal distribution with maximum values in the range N̂ ≈ 350 (T0 = 24 K) to N̂ ≈ 8300 (T0 = 13 K). The droplet beam is crossed at right angles along 10 mm of its path by an effusive Cr atom beam obtained from a home-built high temperature electron bombardment source arranged parallel below the droplet beam. 22 With this crossed beam geometry, and an additional five small apertures to collimate the droplet beam, it can be ensured that no free atoms reach the detector. The heating power of the Cr source is optimized for single atom pickup (∼1700 °C). The beam of Cr-doped HeN is crossed at right angles by laser beams inside the extraction region of a quadrupole mass spectrometer (QMS, Balzers QMG 422). This setup allows 1CR2PI and 2CR2PI mass spectroscopy where either the laser wavelength is scanned and the mass filter is set to the most abundant Cr isotope at 52 u (56 u for Cr−He), or the detected mass is scanned for a fixed laser wavelength. For 2CR2PI one laser is always kept at a constant wavelength of 308 nm (32 468 cm−1).
For 1CR2PI the laser pulses are obtained from a dye laser (Lambda Physik FL3002; dyes: RDC 360 Neu for 27 600−28 800 cm−1, Stilben 3 for 23 200−23 900 cm−1, and Coumarin 307 for 19 100−20 100 cm−1) pumped by an excimer laser (Radiant Dyes RD-EXC-200, XeCl, 308 nm ≙ 32 468 cm−1, ∼20 ns pulse duration, 100 Hz repetition rate). For 2CR2PI a fraction of the 308 nm light is branched off and guided to the ionization region; the temporal overlap with the ∼15 ns dye laser pulses is set to >10 ns. Both laser beams are moderately focused to a spot size of ∼5 mm2. For the two-color experiment both lasers are attenuated as far as possible to reduce the probability of dopant ionization by photons of a single wavelength. Reasonable pulse energies were found within 0.3−0.6 mJ for all four laser wavelength regimes (Stilben 3, RDC 360 Neu, Coumarin 307, and XeCl laser).
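The wavelength-to-wavenumber conversion and the two-photon energy sums used throughout the paper are easy to verify; the short Python check below uses only values quoted in the text (the 308 nm XeCl line, the dye ranges, and the Cr ionization limit of 54 575.6 cm−1).

```python
def wavenumber_cm1(wavelength_nm):
    """Vacuum wavenumber in cm^-1: 1e7 / lambda[nm]."""
    return 1e7 / wavelength_nm

IP_CR = 54575.6                 # Cr ionization limit (cm^-1)
xecl = wavenumber_cm1(308)      # ~32467.5 cm^-1, quoted as 32 468

# z7P excitation (~23 200-23 900 cm^-1): two dye photons are not
# enough, but dye + XeCl exceeds the ionization limit.
print(2 * 23900 < IP_CR)        # True -> no one-color R2PI via z7P
print(23200 + xecl > IP_CR)     # True -> 2CR2PI works

# y7P excitation (~27 600-28 800 cm^-1): two dye photons suffice.
print(2 * 27600 > IP_CR)        # True -> 1CR2PI possible via y7P
```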
In principle, R2PI occurs via the absorption of two photons, which can be of the same or of different colors (1CR2PI and 2CR2PI, respectively). The tunable laser is scanned across a resonant state while the ion yield is recorded as a function of laser wavelength; for 2CR2PI a second laser with constant wavelength is present. Relaxation mechanisms after the first absorption step have to be taken into account. The energy level diagram of selected Cr atomic states 37 is shown in Figure 1 together with various excitation and ionization paths for 1CR2PI and 2CR2PI. Due to the Cr ionization limit at 54 575.6 ± 0.3 cm−1, 38 the successive absorption of at least two photons is required for ionization. The first step of the R2PI scheme is an excitation from the a7S3 ground state (electron configuration: 3d54s) to the excited states z7P2,3,4° (3d54p) or y7P2,3,4° (3d44s4p). The observed broadening and blue shift induced by the HeN are indicated as shaded areas above these levels. Both ground state excitations are accomplished by the tunable dye lasers in the regime of 23 200−23 900 cm−1 for z7P2,3,4° (red arrow in Figure 1) and 27 600−28 800 cm−1 for y7P2,3,4° (blue arrow). Upon excitation to z7P2,3,4° only a XeCl laser photon of 32 468 cm−1 (black arrow) has sufficient energy for ionization, whereas for the y7P2,3,4° states photons of both the dye laser and the XeCl laser are able to ionize. The lower limit for PI with a single XeCl laser photon is marked by a dotted horizontal line. As indicated by the results, bare ground and metastable state Cr atoms and Cr−Hen complexes are produced in the course of 2CR2PI.
Figure 1: Energy level diagram of selected Cr atomic states, 37 showing 1CR2PI and 2CR2PI paths as combinations of dye- and XeCl-laser excitations and ionizations. The shaded areas above the z7P2,3,4° and y7P2,3,4° states indicate the droplet broadened and shifted excitation region. The dashed arrows indicate nonradiative relaxation paths. The dotted horizontal line at 22 108 cm−1 marks the lower limit above which ionization with a single XeCl laser photon (32 468 cm−1) is possible.
■ RESULTS AND DISCUSSION
y7P2,3,4° ← a7S3 Excitation. We begin with the 2CR2PI excitation spectrum of the y7P2,3,4° (3d44s4p) ← a7S3 (3d54s) transition (bottom of Figure 2) because it can be compared with the 1CR2PI spectrum of our previous work (top of Figure 2). 23 Cr+ ions, which are detected at 52 u, are produced by in-droplet excitation with the tunable dye laser to the intermediate y7P2,3,4° states and subsequently ionized by a second photon either from the dye laser (1CR2PI) or from the XeCl laser (2CR2PI). In both 52Cr+ spectra the droplet broadened feature is stretched over about 600 cm−1 to the blue side of the y7P2,3,4° ← a7S3 bare atom transitions 37 (indicated by triangles in Figure 2). This is also in agreement with the BD and LIF spectra. 24 As discussed in more detail in ref 23, this broadening depends on the change in electron configuration and is moderate compared to the same Cr transitions obtained in heavy rare-gas matrices 39 and comparable to the excitation spectra of other atomic species in HeN. 40 Although the onset of the broad structure occurs at the same wavelength in both cases, the 2CR2PI spectrum shows an additional shoulder in the range 27 900−28 100 cm−1 compared to the 1CR2PI spectrum. Upon in-droplet y7P2,3,4° excitation, ejection of bare Cr atoms in excited z5P1,2,3° states has previously been identified.
23,24 We take this increased signal in the 2CR2PI spectrum as an indication for additional relaxation to states below z5P1,2,3°. Because of the higher photon energy available in 2CR2PI (XeCl laser at 32 468 cm−1) as compared to 1CR2PI, a number of excited states that may be populated (e.g., a3P, z7P°, a3H, b5D, a3G, a3F; cf. Figure 1) can add to the PI signal in the 2CR2PI scheme but not in 1CR2PI. A 2CR2PI scan in the same spectral region with the mass filter tuned to the 52Cr+−He mass (gray line in Figure 2, bottom) reveals a spectrum that is comparable in shape to the 52Cr+ signal. This proves the formation of Cr*−He exciplexes during the relaxation−ejection process. The 52Cr+−He signal is much weaker than that of 52Cr+ (different scaling factors have been used for the two traces). However, no conclusions can be drawn about the Cr*/Cr*−He ratio because the excess energy of the ionizing photon leads to fragmentation of the exciplexes at an unknown rate.
The appearance of sharper features also depends on the ionization scheme. In the 1CR2PI spectrum (Figure 2, top) sharp lines above 28 000 cm−1 represent the bare atom transitions from the excited z5P1,2,3° to the autoionizing states g5D2,3,4 and e3D1,2,3, as described in detail in ref 23. These transitions are barely visible in the 2CR2PI spectrum (Figure 2, bottom, black curve). Upon ejection of excited Cr*, the presence of XeCl radiation in the 2CR2PI scheme provides a second ionization channel; this leads to a strong increase of PI into the continuum that apparently outweighs the transition to an AI state by a second resonant dye laser photon. Below 28 000 cm−1, narrow structures are present at the bare atom y7P2,3,4° ← a7S3 line positions in the 2CR2PI spectrum but not at all in the 1CR2PI spectrum. These asymmetric lines with a wing on the blue side are present in neither the BD nor the LIF spectra. 24 We take these features as proof for the generation of bare, ground state (a7S3) Cr atoms and Cr−Hen molecules related to XeCl laser excitation in the 2CR2PI scheme (see discussion below). Due to our crossed pickup geometry we can exclude that bare atoms reach the ionization region directly. This is proven by two facts: first, without helium droplets but with a heated Cr source, we see absolutely no ion signal and, second, the 1CR2PI spectrum (Figure 2, top) has no sharp features at the bare atom line positions. Furthermore, we identify resonant excitations of Cr quintet metastable states (Figure 5), which cannot originate from the Cr source.
Droplet Size Dependence of Relaxation Mechanisms. The relaxation dynamics are influenced by the size of the droplets, and we obtain information about this dependency from the 1CR2PI spectrum in the range of the g5D3 ← z5P3° AI transition (28 180−28 210 cm−1, see inset of Figure 2). The AI peak height is proportional to the number of bare Cr atoms ejected from the droplet in the z5P3° state. The background signal, in contrast, corresponds to relaxation to other states that lie high enough in energy to be photoionized by the dye laser (e.g., y7P°, b3G, z7D°, b3P; note that all of these are higher in energy than z5P3°). Also, Cr*−Hen exciplexes, even in the case of a z5P3° Cr state, contribute to the background and not to the AI peak. Figure 3 shows the ratio of the AI peak height to the background; the peak height is obtained from a Gauss fit of the AI peak. Monitoring the ratio has the advantage that it is not influenced by variations of the HeN flux with nozzle temperature. The AI peak height decreases almost by a factor of 3 with respect to the background for an increase of droplet radius from ∼15 to ∼46 Å. We take this as an indication that for increasingly larger droplets relaxation to lower states than z5P3° occurs.
Figure 2: Top: 1CR2PI spectrum of the y7P2,3,4° (3d44s4p) ← a7S3 (3d54s) transition recorded by detecting 52Cr+. 23 Bare atom ground state transitions are indicated by triangles. 37 The sharp lines can be assigned to transitions of bare, excited atoms to autoionizing states. The spectrum was recorded with T0 = 20 K, and the inset shows scans in the g5D3 ← z5P2,3° region for other droplet sizes. Bottom: 2CR2PI spectrum of the same transition at comparable conditions, detected at the 52Cr+ mass (52 u, black curve) and 52Cr+−He mass (56 u, gray curve). Different scaling factors are used for the two spectra. Detailed scans in the region of bare atom transitions are also shown.
In Ar matrices, where the perturbation by the host matrix is stronger and not limited in time by ejection, Cr excitation to z7P° leads primarily to relaxation to a3P states, which are lower in energy than the states we find to be populated. 39 Although species on the HeN surface also show relaxation, 25,32,41 our findings can better be compared to the Ag−HeN system, where excitation to the droplet broadened Ag 2P3/2 structure leads to the ejection of bare Ag atoms in the 2P1/2 state 33 and increasing the droplet size leads to an increase of the Ag 2P1/2 yield. The increased nonradiative transfer of population in excited Ag to the lowest excited state (2P1/2) for larger droplets supports our findings. In Cr the next lower state (z7F6°) lies about 1000 cm−1 beneath z5P3° and, within a few thousand wavenumbers, a multitude of states of all multiplicities can be found. 37 Increased relaxation to lower states for longer interaction with the helium droplet during ejection from larger droplets thus seems reasonable.
z7P2,3,4° ← a7S3 Excitation. The 2CR2PI spectrum via the z7P2,3,4° (3d54p) resonant intermediate state is shown in Figure 4 for the detection of 52Cr+ (black) and 52Cr+−He ions (gray). Upon comparison of the 52Cr+ spectrum with beam depletion spectra, 24 it becomes evident that for both techniques the droplet broadened feature has the same onset at about 23 400 cm−1 and that it extends several hundred wavenumbers to the blue.
For the 2CR2PI spectrum the spectral shape is given by the combination of in-droplet excitation, subsequent relaxation, ejection from the droplet and, finally, ionization. Only z7P° and a3P lie high enough in energy to be ionized by a XeCl laser photon (32 468 cm−1); relaxation to lower states will not contribute to the ion signal. We thus expect that a major part of the excited atoms relax to lower states and do not contribute to the 2CR2PI spectrum. The 52Cr+−He signal above 23 400 cm−1, although very weak, demonstrates the formation of Cr*−He exciplexes upon excitation to z7P2,3,4°, as for the y7P2,3,4° excitation above (Figure 2). Similar to the y7P2,3,4° intermediate state, narrow spectral structures appear in the 52Cr+ detected spectrum at the bare atom z7P2,3,4° ← a7S3 line positions, also showing a wing toward higher wavenumbers. As will be discussed below, this is an indication for the presence of bare a7S3 Cr atoms and Cr−Hen ground-state molecules.
Formation of Quintet State Atoms. To examine the population of states other than the septet ground state, PI spectra were recorded with the dye laser scanning over the spectral regions of metastable quintet state transitions (green arrows in Figure 1) while the XeCl laser wavelength was fixed. As shown in Figure 5, we observe the population of the metastable a5S2 (3d54s) state. In yet another wavelength range we find evidence for the population of a further metastable quintet state by identifying the bare atom transitions originating from a5D (3d44s2) (not shown). This can be taken as another proof that the atoms cannot originate directly from the evaporation source but have to undergo a droplet-mediated relaxation. Clearly, the spectrum in Figure 5 is dominated by the strong free atom z5P1,2,3° (3d54p) ← a5S2 (3d54s) transition. The structure in the range 19 150−19 600 cm−1 is a superposition of a broad structure of unclear origin and sharp features at the bare atom z5P1,2,3° ← a5S2 transitions. The broad structure is schematically indicated with a gray line, starting on the red side of the free atom transitions and having a weak maximum at 19 300 cm−1. The wing on the blue side of the z5P1,2,3° ← a5S2 feature thus has a width of ∼50 cm−1. Within the observed energy region also the z7D1,2° (3d44s4p) ← a5S2 (3d54s) intercombination lines appear as sharp free atom transitions with wings on the blue side. We note that although these are intercombination lines, they are listed in the literature for bare atoms. 42
Figure 3: Ratio of atoms ionized through the g5D3 autoionization state (g5D3 ← z5P3° transition) to atoms ionized into continuum states in dependence on the droplet size. The line represents a linear fit and serves as a guide to the eye.
Figure 4: 2CR2PI excitation spectrum of the z7P2,3,4° (3d54p) ← a7S3 (3d54s) transition (T0 = 20 K, N̂ = 1300), recorded at the 52Cr+ mass (52 u, black curve) and 52Cr+−He mass (56 u, gray curve) with different scaling factors. The inset shows high resolution scans at both masses with vertical offsets. Bare atom z7P2,3,4° ← a7S3 transitions 37 are indicated by triangles.
Figure 5: 2CR2PI excitation spectrum of the z5P1,2,3° ← a5S2 and z7D1,2° ← a5S2 transitions recorded at the 52Cr+ mass (52 u). Bare atom transitions are indicated with squares. 37 The gray curve serves to indicate a broad structure of unknown origin.
Formation of Cr−Hen Complexes and Cr*−He Exciplexes. Multiphoton ionization schemes of Cr−HeN give rise to the detection of Cr+−Hen complexes in our current study. The abundance of Cr+−Hen with n > 1 is very low, and the products are almost exclusively detected at the mass windows corresponding to Cr+ atoms and Cr+−He in the examined spectral regimes.
The formation of exciplexes, consisting of excited atoms or molecules with several attached helium atoms, has been observed upon photoexcitation for various species inside and on the surface of helium nanodroplets. 25−28,32−34 In these experiments it has been shown that REMPI spectroscopy allows conclusions on the formation of intermediate neutral exciplexes only with some reservation because it probes the resultant ionic complexes. If the ionizing laser photon energy is higher than the vertical ionization potential, the generated ionic complex can carry internal energy, which may cause the evaporation of helium atoms from the ionic complexes. Consequently, our REMPI mass spectra do not necessarily reflect the abundance of exciplexes; moreover, the number of He atoms attached to the dopant will be underestimated. In our experiments, the formation of Cr*−He exciplexes is observed. The Cr+−He spectra presented in Figures 2 and 4 have a similar shape to the droplet broadened transitions monitored in the Cr+ mass window. This shows that Cr*−Hen exciplexes are formed and ejected upon photoexcitation with the dye laser. In contrast to other helium droplet isolation experiments, we find evidence for the presence of ground state Cr−Hen complexes, which will be discussed in the following. These complexes are formed when the XeCl laser is present and are ionized by two-photon ionization; i.e., at least three photons are involved in the overall process, which proceeds during the laser pulse duration (∼20 ns).
A striking difference between the 2CR2PI and 1CR2PI spectra in Figure 2 is the emerging sharp spectral lines that correspond to the bare atom y7P2,3,4° ← a7S3 transitions. The spectrally sharp transitions are accompanied by a small wing that extends toward the blue side. These spectral features are exclusively observed if the XeCl laser pulse is present. Similar transitions can be seen in the 2CR2PI spectra shown in Figures 4 and 5. Note that these features appear in addition to the droplet broadened structures and, especially, that they are not present at the y7P4° ← a7S3 transition shown in the 1CR2PI spectrum (top of Figure 2); in this region the dye laser excites a Cr−HeN transition but does not give rise to a sharp spectral line. Furthermore, the sharp spectral lines shown in Figure 2 accompanied by blue wings are only present if the QMS is set to the Cr bare atom mass window.
Evidence for the connection of the blue wings to the formation of Cr−Hen complexes can be seen in the inset of Figure 4 at the z7P2° (3d54p) ← a7S3 (3d54s) transition, where the Cr+ and Cr+−He ion yields are compared (Cr+ is vertically offset and the two signals are scaled with different factors). It can be seen that, in contrast to the Cr+ signal, the small wing but not the sharp lines are observed at the Cr+−He mass (the low abundance of Cr+−He2 forbids the recording of an excitation spectrum for the corresponding mass window). Hence the origin of the sharp peaks can be attributed unambiguously to bare atoms in the a7S3 state. The fact that the wings are present in both mass windows and that they are only observed in the 2CR2PI spectra demonstrates that they must originate from Cr−Hen (n ≥ 1) products generated by a XeCl laser UV photon. It is important to note that in this spectral region, below the onset of the droplet broadened transition at 23 400 cm−1, the signal corresponds exclusively to an excitation spectrum of products formed by the XeCl laser. Above 23 400 cm−1 the production of ground state Cr and Cr−Hen by the XeCl laser competes with dye laser excitation of Cr−HeN. We think that the absence of pronounced wings in the Cr+−He signal for the z7P3,4° (3d54p) ← a7S3 (3d54s) transitions in Figure 4 is related to the competition between these two excitation paths. Alternatively, the ability of excited Cr atoms to bind He atoms might be higher for the z7P2° state than for the z7P3,4° states. At the y7P2,3,4° (3d44s4p) ← a7S3 (3d54s) transitions in Figure 2 the signal-to-noise ratio in the Cr+−He signal was unfortunately too low, which forbids a comparison to the Cr+ signal near the small blue wings.
The observation of these wings is remarkable because these Cr−Hen complexes must be in their electronic septet or quintet ground state. Recent calculations of our group 43 show that the lowest Cr−He quintet and septet states are very weakly bound (a few wavenumbers; only one vibrational level is supported) with a large internuclear separation (Re > 5 Å). Calculations for coinage metals show that the binding energy rises with an increasing number of helium atoms attached to the metal atom. 44 For the ground state, the coinage metals with their completely filled d-orbitals and one electron in the s-orbital, and chromium with its half-filled d-orbitals and one s-electron, are very similar in their interaction with He atoms, which is dictated mainly by the electron in the s-orbital. 43 Consequently, the observed spectrum suggests that larger Cr−Hen complexes are formed upon UV excitation, followed by droplet-mediated relaxation via various routes into the electronic septet and quintet (and probably also into the triplet) ground states. Note that this process must be completed in less than 20 ns, the pulse duration of the synchronized excitation and ionization lasers. The observed narrow structures represent the spectral signature of a transition that originates from a very weakly bound ground state at large internuclear distances into the slightly repulsive part of an intermediate Cr−Hen state, as expected from the Cr−He diatomic potential energy curves in ref 43. The excess energy of the laser and the internal energy of the formed Cr+−Hen complex will cause fragmentation of the intermediate complexes, which explains the observation of mainly Cr+ and Cr+−He in the mass spectrum. Consequently, REMPI spectroscopy forbids conclusions on the size of the intermediate Cr−Hen complexes. From the present data we cannot exclude a surface migration of Cr atoms upon UV excitation; a similar scenario has been suggested for excited NO* molecules on helium nanodroplets. 34 The investigated Cr transitions can be compared to the 4p ← 4s transition in potassium, which is located on the droplet surface. 45 Similar to Cr−Hen, the potassium transition exhibits a characteristic narrow, asymmetric shape as well as a coincidence of the bare atom transition with the rising edge of the droplet broadened transition. More sophisticated calculations will assist the interpretation of these observations.
On the basis of our data we cannot draw conclusions on the process that underlies the formation of ground state and metastable Cr atoms and Cr−Hen complexes because the spectral regime above the y7P° state is not covered by our dye laser. We propose two different scenarios for the production of ground state and metastable Cr atoms and Cr−Hen. (i) Septet states are absent in the relevant spectral regime, but states with other multiplicities lie in the vicinity of the XeCl laser photon energy. Transitions from the septet ground state into states with other multiplicities, as they are observed for Cr (Figure 5), may be excited and be responsible for the production of ground state and metastable state complexes. (ii) At our experimental conditions, Cr dimers are present in a certain fraction of the helium droplets. The excitation of a dimer transition in the relevant spectral region may give rise to the production of various products such as Cr + Cr*−Hen, Cr* + Cr−Hen, or Cr*2. The formation of Cr dimers is observed in helium nanodroplets, 22 and their spectra will be explored in the near future.
■ SUMMARY AND CONCLUSION
Chromium atoms doped into superfluid helium nanodroplets are investigated with one- and two-color resonant two-photon ionization spectroscopy (1CR2PI and 2CR2PI, respectively) via the y7P2,3,4° resonant intermediate states and with 2CR2PI via the z7P2,3,4° states. We find two independent indications that nonradiative, droplet-mediated population transfer of excited Cr* atoms takes place to lower states than previously identified. 23,24 For the y7P2,3,4° intermediate states, comparison of 1CR2PI and 2CR2PI is possible, and an additional shoulder observed with 2CR2PI indicates the population of Cr* states that are too low in energy to be detected with 1CR2PI. Additionally, a decrease of the z5P3° population of bare Cr* atoms for increasing droplet size also points toward relaxation to energetically lower states as the duration of the interaction with helium during ejection is increased. The formation of Cr*−Hen exciplexes upon in-droplet excitation of y7P° and z7P° is demonstrated by the fact that the excitation spectra obtained with Cr+ and Cr+−He detection are identical.
All 2CR2PI spectra reveal sharp lines at the bare atom positions, which we attribute to the presence of the 308 nm XeCl laser. A fast (<20 ns) excitation−relaxation cycle produces bare Cr atoms in the septet ground state (a7S3) as well as in metastable quintet states (a5S2 and a5D), which are subsequently probed by 2CR2PI. All of these lines show a wing on their blue side, which indicates the presence of ground state and metastable Cr−Hen molecules. In addition, the detection of Cr+−He ions in spectral regions of the z7P2° ← a7S3 wing verifies the presence of ground state Cr−Hen complexes. Given the weak binding energy of ground-state septet and quintet Cr−He diatomic molecules 43 and the presumably only slightly stronger bond of Cr−Hen complexes, 44 this observation is remarkable as it demonstrates that a complete relaxation to the Cr ground state has to take place inside the droplet.
The complex electronic structure of the Cr atom leads to an even more complex electronic structure inside a helium droplet due to the perturbation by the surrounding helium, and several different relaxation channels to repulsive states that cause an ejection from the droplet might compete. The fact that Cr ions with attached helium atoms are detected in our experiments suggests that Cr−Hen formation needs to be considered for the explanation of relaxation pathways and the description of the dynamics of Cr atoms inside helium droplets, calling for more sophisticated theoretical models. The Cr−He potential energy curves calculated in our group 43 can serve as a starting point for this task. Finally, it cannot be decided from our current data whether the ground state Cr atoms and Cr−Hen complexes originate from excitation of single Cr atoms inside HeN or from photoinduced dissociation of Cr dimers inside the droplet. Photoexcitation of Cr dimers is the subject of our current research. | 8,071 | 2014-04-07T00:00:00.000 | [ "Physics" ] |
widgyts: Custom Jupyter Widgets for Interactive Data Exploration with yt
Summary
widgyts is a custom Jupyter widget library to assist in interactive data visualization and exploration with yt. yt (Turk et al., 2011) is a python package designed to read, process, and visualize multidimensional scientific data. yt allows users to ingest and visualize data from a variety of scientific domains with a nearly identical set of commands. Often, these datasets are large, sparse, complex, and located remotely. Creating a publication-quality figure of an area of interest for this data may take numerous exploratory visualizations and subsequent parameter-tuning events. The widgyts package allows for interactive exploratory visualization with yt, enabling users to more readily determine which parameters and selections they need to best display their data.
The widgyts package is built on the ipywidgets (Grout, Frederic, Corlay, & Ragan-Kelley, 2019) framework, which allows yt users to browse their data using a Jupyter notebook or a Jupyterlab instance. widgyts is developed on GitHub in the Data Exploration Lab organization. Issues, questions, new feature requests, and any other relevant discussion can be found at the source code repository (Munk & Turk, 2019).
Motivation
Data visualization and manipulation are integral to scientific discovery. A scientist may slice and pan through various regions of a dataset before finding a region they wish to share with colleagues. These events may also require shifting colormap settings, like the scale, type, or bounds, before the features are highlighted to effectively convey a message. Each of these interactions will require a new image to be calculated and displayed.
A number of packages in the python ecosystem use interactivity to help users parameter-tune their visualizations. Matplotlib (Hunter, 2007) and ITK (Ibanez et al., 2019) have custom widgets built on the ipywidgets framework (Corlay, Silvester, et al., 2019; McCormick et al., 2020) that act as supplements to their plots; this is the principle that widgyts follows as well. Other libraries like Bokeh (Bokeh Development Team, 2019) distribute interactive JavaScript-backed widgets, and frameworks like bqplot (Corlay, Sunkara, et al., 2019) return every plot with interactive features. The packages named here are by no means comprehensive; the python ecosystem is rich with interactive tools for visualization. However, this illustrates the need for and investment in interactivity within the visualization community.
A common use case for visualization is to have data stored remotely on a server and some interface with which to interact with the data over the web. Because every plot interaction requires a new image calculation, this may result in significant data transfer needs. In this case, a request is sent to the server with every new plot interaction; the server calculates a new image, serializes it, and sends the image back to the client. The time to produce and deliver one image can generally be expressed as t_server = t_image calc, server + t_pull, image, and the total compute time spent on image generation is T_server = n * t_server, where n is the number of interactions with the figure.
widgyts modifies this process by shifting the image calculation to occur client-side in the browser. Rather than image calculation and serialization happening on a remote server, a portion of the original data is uploaded into the WebAssembly backend of widgyts. The total time to calculate images client-side can be expressed as T_client = t_pull, data + n * t_image calc, client. Subsequent interactions (n) with the image affect only the image-calculation term of the equation, so the data transfer cost is paid once. Thus, this becomes advantageous as T_client < T_server, or n * t_image calc, client + t_pull, data < n * (t_image calc, server + t_pull, image).
The time to pull an image or data is dependent on the data size and the transfer rate. T_client will be lower than T_server as the number of interactions n grows, as the size of the image data grows, and as the time to calculate the image on the client, t_image calc, client, decreases.
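The trade-off can be made concrete with a few lines of Python; the timing values below are placeholders for illustration, not measurements from the paper.

```python
def t_total_server(n, t_calc_server, t_pull_image):
    """T_server = n * (t_image calc, server + t_pull, image)."""
    return n * (t_calc_server + t_pull_image)

def t_total_client(n, t_pull_data, t_calc_client):
    """T_client = t_pull, data + n * t_image calc, client."""
    return t_pull_data + n * t_calc_client

# Placeholder timings in seconds: a large one-time slice download,
# but much cheaper per-interaction work in the browser.
for n in (1, 10, 100):
    ts = t_total_server(n, t_calc_server=0.20, t_pull_image=0.30)
    tc = t_total_client(n, t_pull_data=5.0, t_calc_client=0.05)
    print(n, ts, tc, "client wins" if tc < ts else "server wins")
# n=1: 0.5 vs 5.05 (server wins); n=100: 50.0 vs 10.0 (client wins)
```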
Moving image calculation to the client requires a large initial cost of transferring a portion of the original data to the client, which may be substantially larger than the size of a single image. However, a dataset with sparse regions will be more efficient to transfer to the client and subsequently calculate and pixelize there. Pixelizing a dataset with large, sparse regions of low resolution, such as one calculated from an adaptive mesh, with a fixed higher resolution will require recalculating and sending pixel values for a region that may only be represented by a single value. Thus, for certain data representations this methodology also becomes advantageous.
The WebAssembly Backend
To allow for efficient data loading into the browser we chose to use Rust compiled to WebAssembly. The WebAssembly backing of widgyts allows for binary, zero-copy storage of the loaded data on the client side, and WebAssembly has been designed to interface well with JavaScript. Further, the primitive structure of WebAssembly reduces the time to calculate an image in the browser, thus reducing the time to calculate the image client-side. Finally, WebAssembly is executed in a sandboxed environment separate from other processes and is memory safe. At the time of writing, widgyts is the only WebAssembly-native backed visualization widget in the python ecosystem.
While yt can access data at an arbitrary location within the dataset, widgyts is structured to access any data within a 2D slice. Thus, only a slice of the data is uploaded client-side, not the entire dataset. For the large, sparse datasets that widgyts has been designed for, it would be infeasible to upload the entire dataset into the browser. A new slice in the third dimension will require an additional data upload from the server. Therefore, not all exploration of the dataset can be performed exclusively client-side.
Results
The following image is a simple timing comparison between using yt with the Jupyter widgets package ipywidgets and using the widgyts package on the same dataset. The dataset is the IsolatedGalaxy dataset, a commonly used example in the yt documentation consisting of a galaxy simulation performed on an adaptive mesh; the dataset has variable resolution and is sparse near the domain boundaries. A notebook is included in the widgyts repository (Munk & Turk, 2019) for anyone interested in replicating this analysis.
Figure 1: A timing comparison between using ipywidgets with yt and widgyts on the IsolatedGalaxy dataset distributed with yt. The image generated by each tool is 512x512 pixels. Each timing point is based on a number of panning interactions in x, averaged over 10 measurements. The data points are accompanied by a 95% confidence interval.
Other tools for interactivity may be faster than the naive implementation with ipywidgets that we've included here. However, these results remain an illustrative example that loading data into the browser and performing image recalculation in the browser is advantageous.
Conclusions
In this paper we introduced widgyts, a custom widget library to interactively visualize and explore data with yt. widgyts makes large, sparse data exploration accessible by passing data to the browser with WebAssembly, allowing image generation to occur client-side. As the number of interactions from the user increases and as datasets vary in sparsity, widgyts' features allow for faster responsiveness. This will reduce the use of expensive compute resources (like those of a lab or campus cluster) and move parameter-tuning events to a local machine. | 1,877.4 | 2020-01-29T00:00:00.000 | [ "Computer Science" ] |
The Relationship between Socio-Economic Status, General Language Learning Outcome, and Beliefs about Language Learning
The objective of this study is to explore the probable relationship between Iranian students' socioeconomic status, general language learning outcome, and their beliefs about language learning. To this end, 350 postgraduate students taking English for specific purposes courses at Islamic Azad University of Neyshabur participated in this study. They were grouped in terms of their socioeconomic status. They answered a questionnaire in which they indicated their beliefs about language learning in different contexts of language use. In addition, a general language proficiency test (a TOEFL practice test) was administered to all the participants to homogenize them in terms of general language proficiency or general language learning outcome. The quantitative data were subjected to a set of parametric statistical analyses, including descriptive statistics and factor analysis. The findings showed a positive relationship between the students' economic status and general language learning outcome. They also showed a significant relationship between the participants' language learning outcome and their beliefs about language learning. The findings suggest that if language instructors are equipped with the necessary information to assist language learners in coping with their negative beliefs, the process of language learning is not only accelerated but probable measurement errors may also decrease.
Introduction
The idea that students' beliefs about foreign language learning have an influence on their success or failure in achieving competence in a foreign language is well documented (Horwitz, 1988; Jernigan, 2001; Kern, 1995; Miele, 1982; Rifkin, 2000; Strevens, 1978). The major findings of the above-mentioned studies indicate that individuals' positive or negative beliefs and perceptions about foreign language learning have a correspondingly positive or negative effect on their success. Mantle-Bromley (1995), for example, argued that positive beliefs about foreign language learning, in combination with a positive learning environment such as trust-building between teachers and students, facilitate foreign language learning. Horwitz (1987) argued that students' beliefs about foreign language learning affect the types of learning strategies that these students choose. Mirza (2001), in a study on the relationship between socioeconomic status and learning outcomes, found that the socioeconomic status of students has a fairly significant effect on their learning outcomes. In fact, the socioeconomic characteristics of students, which are examined to clarify students' learning outcomes, constitute the most common factor in the sociology of education (Sirin, 2005). Mattheoudakis and Alexiou (2009) found that students from a superior socioeconomic status have some advantages over students from a socioeconomically disadvantaged background. Students who belong to high social and economic classes are usually successful because they have open opportunities that are necessary to accelerate the learning process, whereas people who belong to lower socioeconomic statuses deal with a lack of resources (Akhtar & Niazi, 2011). Other research studies showed that students from the high socioeconomic status group earn higher test scores and better grades than children from the low group (Knapp & Shields, 1990; Reed & Sautter, 1990). The socioeconomic status of students is most commonly determined by combining educational level, occupational status, and income level (Jeynes, 2002). Hamid (2011) scrutinized the relationships between students' socioeconomic status and their learning outcomes.
The results showed that there were patterned relationships between the students' socioeconomic characteristics and their learning outcomes in English. Students who had higher levels of social and economic status were more likely to obtain higher scores on the proficiency test as well as higher grades in English. According to Babikkoi and Binti-Abdul-Razak (2014), the socioeconomic status of learners is a fundamental factor that may contribute to English language learning outcomes, particularly because students of high socioeconomic status are encouraged to learn. The situation is often different for students of low socioeconomic status, who are not motivated to study. Therefore, investigating learners' beliefs in the context of their varying characteristics, such as socioeconomic status, and the effect such a variable can have on a student's learning outcome is a main component of educational progress. As students bring their own attitudes, interests, and skills to the learning situation, and these beliefs and attitudes affect the opportunities for success for every student, socioeconomic status and beliefs about language learning are strong factors that teachers should take into consideration for educational progress. Awareness of students' beliefs would result in students working together with the same feelings and efforts. Understanding students' beliefs and feelings in language learning can lead teachers to adopt new teaching techniques that are fair and effective. In fact, knowledge of students' backgrounds and beliefs helps teachers examine their instructional and lesson plans and become sensitive to providing better learning and teaching conditions.
Review of the Related Literature
According to Ogunshola and Adewale (2012), the relations between society, education, and the economy are so crucial that the training of a student depends upon all three factors. The learning outcome of the student is related to the student's social class, where not only socioeconomic status plays its role but educational level also contributes its part. The groundwork for research into students' beliefs was laid for the most part in the 1970s and 1980s, with studies that emphasized validating and defining key concepts on which further studies could build. Work such as Bartley's (1970) article correlating belief with attrition, Gardner's (1985) exploration of the socio-educational model of language learning, and Horwitz's (1988) study of learners' attitudes largely emphasized the task of operationalizing the target construct, crafting a survey from its primary identified components, and validating that survey. Important instruments such as the Foreign Language Attitude Scale (FLAS) (Bartley, 1970), the Attitude/Motivation Test Battery (AMTB) (Gardner, 1985), and the Beliefs about Language Learning Inventory (BALLI) (Horwitz, 1988) were the tools created, validated, or used in those studies. Other texts of the time that focused on individual student differences, notably Spolsky (1989) and Skehan (1989), also depended on these instruments to define these concepts. Some researchers showed a medium to strong relationship between students' learning outcomes and socioeconomic status; that is, successful students belong to high social and economic classes and unsuccessful students belong to low socioeconomic statuses (Barry, 2005; Ewijk & Sleegers, 2010; Sirin, 2005). The socioeconomic status of a student is most commonly determined by combining educational level, occupational status, and income level (Jeynes, 2002). Studies have repeatedly found that socioeconomic status affects students' outcomes (Baharudin & Luster, 1998; Eamon 200; Jeynes 2002; Majoribanks, 1996; Mecneal 2001; Seyfried, 1998). Some other studies with the trait/student orientation have remained at the descriptive level, refraining from linking students' beliefs to outcomes. For instance, authors of studies using both structural equation modeling (Csizér & Dörnyei, 2005) and qualitative interviews (Graham, 2006) have provided detailed descriptions of motivation (in the former case) and self-efficacy (in the latter case). These researchers did suggest, interestingly, that positive academic outcomes were expected, given the effort implied in highly motivated students or students with high self-efficacy. Descriptions of students' beliefs have also been common in the field, often focusing on specific aspects of language learning. According to Rad (2010) and Dörnyei (2005), the beliefs and attitudes of learners about foreign languages are fundamental and at the focus of educational progress. Altan (2012) believed that at least some knowledge of English is urgent for making progress in life and work because it provides individuals with high social status and job opportunities. In fact, the seemingly stronger link between motivational factors, learners' beliefs, and socioeconomic status might be due to the highly segregated nature of education and the deep socioeconomic division among the investigated learners (Lamb, 2012).
Therefore, the purpose of this study is to explore the probable relationship between Iranian students' socioeconomic status, general language learning outcome, and their beliefs about language learning. Understanding the role of learners' goals, self-related beliefs, and self-regulatory processes is essential before effective instructional programs for learners studying in different social contexts can be designed and implemented (Kormos & Kiddle, 2013). Socioeconomic status, however, does not only affect language learning outcomes but also has an influence on motivation to learn, self-regulation, and students' self-related beliefs (Fan, 2011). For Vellymalay (2012), social and economic factors provide educational resources for students and have the greatest impact on their learning outcomes. Social factors also motivate and help students to have better learning opportunities and educational conditions. Concerning the mentioned points, the research questions of this study are formulated as follows: 1) Is there any relationship between the socioeconomic status of Iranian students and their beliefs about language learning?
2) Is there any relationship between general language learning outcomes of Iranian students and their socioeconomic status?
To address the research questions, the method and the findings are discussed in the following sections.
Participants
The participants in this study were 350 Iranian postgraduate students of Management, taking English courses at Islamic Azad University in Neyshabur. They were in the 23-45 age range, comprising 58.28% males and 41.72% females. Participants were distributed across five social and economic groups. The majority of the students belonged to the middle class (30.86%) and lower middle class (30%); 20% of the students were upper middle class, 12% were lower class, and 7.14% belonged to the upper class. In addition, 24.85% of the students were unemployed and 75.15% were employed.
Instrument
The instrument was developed in two stages. In the first stage, many items were derived from existing instruments such as the Beliefs about Language Learning Inventory (BALLI), developed by Horwitz (1988). In the second stage, in order to elicit particular information about the participants' social and economic backgrounds, a Socioeconomic Status (SES) Questionnaire was used. It was constructed and reviewed by different experts in psychology, sociology, and languages, who gave their feedback on the content.
The Beliefs about Language Learning Inventory (BALLI)
The questionnaire derived from Horwitz's (1987) 35-item Beliefs about Language Learning Inventory (BALLI) was utilized for this study. The questionnaire is a five-point Likert scale, ranging from 1 (strongly disagree) to 5 (strongly agree), for 33 items. The remaining two items have a different scale and different response options: they measure the level of difficulty of English (item 4) and the requisite time to learn a new language (item 15). The participants had to rate the statements on their beliefs about language learning. The BALLI explores five logical areas: beliefs dealing with foreign language aptitude (items 1, 2, 5, 6, 10, 11, 16, 19, 30, 33, 35), learning and communication strategies (items 9, 13, 14, 17, 18, 21, 22, 26), the nature of language learning (items 8, 12, 17, 23, 27, 28), the difficulty involved in learning (items 3, 4, 15, 25, 34), and motivations and expectations (items 20, 24, 29, 31, 32). To remove probable ambiguities, the BALLI questionnaire was translated into Persian, the participants' mother tongue. Before the actual administrations, the BALLI and the socioeconomic status questionnaire were checked by some professors of psychology and language, who gave useful feedback on the content of the questionnaire as well as the clarity of the items. The BALLI questionnaire was then piloted on a sample of about 40 students, whose feedback improved the items.
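For concreteness, the item groupings listed above can be expressed as a small scoring sketch. This is illustrative code, not from the study; it assumes responses are stored as a 350 x 35 array of 1-5 ratings, and it follows the item listing above (including item 17 appearing under two areas):

```python
import numpy as np

# BALLI item numbers per belief area, as listed above (1-indexed).
BALLI_AREAS = {
    "foreign_language_aptitude": [1, 2, 5, 6, 10, 11, 16, 19, 30, 33, 35],
    "learning_and_communication_strategies": [9, 13, 14, 17, 18, 21, 22, 26],
    "nature_of_language_learning": [8, 12, 17, 23, 27, 28],
    "difficulty_of_language_learning": [3, 4, 15, 25, 34],
    "motivations_and_expectations": [20, 24, 29, 31, 32],
}

def subscale_means(responses: np.ndarray) -> dict:
    """Mean rating per belief area; `responses` is (n_students, 35), values 1-5."""
    return {area: responses[:, np.array(items) - 1].mean()
            for area, items in BALLI_AREAS.items()}
```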
SES Questionnaire
"Socioeconomic Status Scale (SES) Questionnaire" was used to identify the Socioeconomic Status (SES) of the sampled students.It was reviewed by different experts of Psychology, Sociology, and Language, who give their feedback on the content.In the light of the experts' opinions, the instruments were finalized.
TOEFL Test
All the participants took a general language proficiency test (TOEFL) to homogenize them in terms of language proficiency. For the purpose of this study, only students at the intermediate proficiency level were selected. Thus, from the initial sample of 500 participants, only the 350 participants whose scores fell within one standard deviation of the mean were recruited for the purpose of the study. The test was piloted on a sample of about 40 students to improve the items. The reliability analysis showed an overall Cronbach's alpha of .817, which was acceptably high.
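A minimal sketch of the selection and reliability steps described above, on synthetic scores (this is not the authors' code; note that with normally distributed scores, keeping participants within one standard deviation of the mean retains roughly 68% of a 500-person sample, consistent with the 350 retained here):

```python
import numpy as np

rng = np.random.default_rng(0)
scores = rng.normal(60, 12, size=500)        # synthetic TOEFL practice scores

# Keep only participants within one standard deviation of the mean,
# homogenizing the sample at the intermediate proficiency level.
mean, sd = scores.mean(), scores.std(ddof=1)
selected = scores[np.abs(scores - mean) <= sd]

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha: (k / (k - 1)) * (1 - sum of item variances / total variance)."""
    k = items.shape[1]
    return k / (k - 1) * (1 - items.var(axis=0, ddof=1).sum()
                          / items.sum(axis=1).var(ddof=1))
```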
Procedure
To remove probable ambiguities, the BALLI questionnaire was translated into Persian, the participants' mother tongue. Before the actual administrations, the BALLI and SES questionnaires were checked by some professors of psychology and language, who gave useful feedback on the content of the questionnaires as well as the clarity of the items. The BALLI was then piloted on a sample of about 40 students, whose feedback improved the items. In addition, a general language proficiency test (a TOEFL practice test) was administered to all the participants to homogenize them in terms of language proficiency. Due to practical problems, only the reading, grammar, and written expression sections were administered to the participants. A one-way ANOVA was conducted to determine the relationship between socioeconomic status and beliefs about language learning.
Data Analysis
The statistical procedures used in the study were the Cronbach's alpha formula, descriptive statistics, a principal component analysis, and a one-way analysis of variance.
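A hedged end-to-end sketch of these procedures on synthetic data is given below. The scipy and scikit-learn calls are standard; the data and variable names are hypothetical, and the numbers produced have no relation to the study's results:

```python
import numpy as np
from scipy import stats
from sklearn.decomposition import PCA

rng = np.random.default_rng(1)
n = 350
responses = rng.integers(1, 6, size=(n, 35)).astype(float)  # BALLI ratings
ses_level = rng.integers(0, 5, size=n)                      # 5 SES classes
outcome = rng.normal(60, 10, size=n)                        # proficiency scores

# Descriptive statistics.
print(outcome.mean(), outcome.std(ddof=1))

# Principal component analysis over the 35 belief items.
belief_factors = PCA(n_components=5).fit_transform(responses)

# One-way ANOVA: first belief factor across the five SES levels.
groups = [belief_factors[ses_level == c, 0] for c in range(5)]
f_stat, p_value = stats.f_oneway(*groups)

# Pearson correlation between SES and learning outcome.
r, p = stats.pearsonr(ses_level, outcome)
```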
Results and Discussion
The purpose of the current study was to determine to what extent socioeconomic status, general language learning outcome, and beliefs about language learning are related. The current data highlight the importance of socioeconomic status as a strong predictor of students' outcomes and of the principal variables of beliefs about language learning. To scrutinize the relationship between these factors, the statistical procedures described above were used.
Latent Variables Explored by the BALLI
In order to enhance the reliability of the results, a principal component analysis of the data was carried out. Table 1 summarizes the results of the principal component analysis of the data gathered in this study, as well as Horwitz's (1987) separation of items under her five themes. The first factor, labeled foreign language aptitude, consists of 11 items reflecting students' beliefs about language aptitude. Items 1, 2, 5, 6, 10, and 33 are concerned with the language learning attitude of students, while items 35, 11, 16, and 19 are concerned with disagreement about abilities in learning foreign languages. The second factor, labeled difficulty of language learning, consists of 5 items representing students' beliefs related to the simplicity or difficulty of language learning. The third factor, labeled the nature of language learning, comprises 6 items reflecting students' attitudes about the nature of language learning and dealing with the importance of English-speaking cultures and translation. Items 8, 12, and 28 focus on knowing the foreign culture and avoiding translation. The fourth factor, labeled learning and communication strategies, comprises 8 items reflecting students' beliefs about practicing the English language. The items under this factor emphasize rehearsal and error avoidance strategies. The fifth factor, labeled motivation and expectations, comprises 5 items reflecting attitudes about better opportunities.
Comparison of Factors
Using principal component analysis (PCA), five factors were established, each corresponding to one of Horwitz's (1988) themes. Table 2 provides a summary of the results for the five factors of beliefs about language learning. According to the statistical analysis, factor 4 received the highest mean among the factors. This shows that all the participants believed that rehearsal of the language and learning strategies are a very important part of language learning. Students also reported high degrees of motivation; to them, motivation is a contributing factor in language learning. Factors 2 and 3 indicate that students lean toward the difficulty and the nature of language learning, suggesting that students feel that the target language is of moderate difficulty. The results for factor 1 also suggest that gender does not play a crucial role in language learning.
Relationship between Socioeconomic Status and Beliefs about Language Learning
A one-way ANOVA was conducted to determine the relationship between socioeconomic status and beliefs about language learning. The findings showed that the socioeconomic status of the learners has some influence on the learners' beliefs. Table 3 indicates the relationship between students' beliefs about language learning at different socioeconomic status levels using the one-way ANOVA. Table 3 shows that the difference among the mean scores of students' beliefs about language learning at the different socioeconomic status levels (upper class, upper middle class, middle class, lower middle class, lower class) is significant (F = 10.012, p = .000).
Socioeconomic Status Score
All the variables were scored, and total scores were used to form the different classes. The maximum score obtained on the socioeconomic status questionnaire by the students was 51 and the minimum score was 5. According to the obtained scores, 25% of the students are unemployed, and the lower classes have the highest proportion of unemployment. Table 4 indicates the relationship between social class and employment. To see whether the socioeconomic status of the students and their outcomes were well distributed, related descriptive statistics (means and standard deviations) were used. As well, to study the relationship between socioeconomic status and the learning outcomes of the students, the Pearson coefficient of correlation was found suitable. Table 5 shows the means and standard deviations of socioeconomic status (SES); it shows a relatively large standard deviation, which indicates that socioeconomic status scores were well spread out. Table 6 shows the distribution of the participants across socioeconomic classes as well as their average scores; it indicates that the upper class and upper middle class obtained better scores than the lower class and lower middle class. Means and standard deviations were also used to see whether the outcomes were well spread out. Table 7 shows the means and standard deviations of students' learning outcomes; the table shows that students' learning outcomes were well spread out. To study the relationship between socioeconomic status and the outcomes of the students, the Pearson coefficient of correlation was used; Table 8 shows the results. The effect of socioeconomic status on education emphasizes the manner in which the socioeconomic status of students influences their beliefs, perceptions, and performance in the learning environment. In reviewing the table, some findings seem noteworthy. From the values listed in Table 8, it appears that upper class students do the best. The upper middle class also appears to be keen on learning. In comparison, the lower middle and lower classes have less interest in learning. This indicates that students from higher classes are interested in learning. In sum, the results suggest that upper class students appear to engage in learning more than the other groups. Looking at students by status, the influence of social and economic class appears increasingly important, as lower class students look less concerned with learning.
Relationship between Socioeconomic Status and Learning Outcome
The second research question pertains to the relationship between socioeconomic status and learning outcome. Table 8 indicates the correlation between students' outcomes and their socioeconomic status, which suggests that there is a significant relationship between social class and students' outcomes. The upper class obtained the best scores and the lower class obtained the lowest scores. The findings regarding the relationship between socioeconomic status and outcome show that upper class students were more eager to study than lower class students; that is, learning and social status are correlated. Students from the upper middle class show a positive relationship with language learning and education. The performance of lower middle class students was not good, and the performance of the lower class was the worst. It can be concluded that socioeconomic status significantly affects the language achievement of students.
Conclusion
The purpose of this study was to investigate the types of beliefs of Iranian postgraduate students regarding their learning of a foreign language. These beliefs and attitudes were also compared with the effect of socioeconomic status on education. Overall, the results of this study suggest that the reasons participants study a foreign language have much to do with socioeconomic status. Moreover, this trend becomes more pronounced in upper class and upper middle class students, whose success, failure, and motivation to learn a foreign language are strongly influenced by social class. The influence of socioeconomic status as a strong predictor of students' outcomes is prevalent. In fact, understanding language learners is a matter of examining a variety of evidence about their learning of language. Factors that promote foreign language learning are twofold: socioeconomic status and a genuine interest in foreign languages as a field of study. Factors that discourage foreign language learning and learning achievement are probably related to negative beliefs concerning social and economic status, interests, and motivation. It is important for teachers to generate more positive attitudes in their students alongside the apparent evolution in students' beliefs and attitudes as they progress. A wide variety of factors might be expected to influence foreign language learning; one of the most important, explored in this study, is the effect of socioeconomic factors on the language learning process. Based on the findings of this study, learners' socioeconomic status has a significant influence on language learning beliefs and academic outcomes.
Table 2. Selected factors of the BALLI questionnaire and descriptive statistics
Table 3. One-way ANOVA for students' beliefs about language learning at different socioeconomic status levels
Table 4 shows that 25 students belong to the upper class, 70 students are from the upper middle class, 108 students are from the middle class, 108 students belong to the lower middle class, and 42 are from the lower class. Thus, the great majority of the students belong to the middle class and lower middle class. What is also clear from the table is that 24 percent of unemployed students belong to the lower classes.
Table 6. The distribution of the participants in socioeconomic classes based on SES
Table 7. Means and standard deviations of students' learning outcomes
Table 8. Correlation coefficients between students' socioeconomic status and learning outcomes
"Linguistics"
] |
Three Feasibility Constraints on the Concept of Justice
The feasibility constraint on the concept of justice roughly states that a necessary (but not sufficient) condition for something to qualify as a conception of justice is that it is possible to achieve and maintain given the conditions of the human world. In this paper, I propose three alternative interpretations of this constraint that could be derived from different understandings of the Kantian formula 'ought implies can': the ability constraint, the motivational constraint and the institutional constraint. I argue that the three constraints constitute a sequence in the sense that accepting the motivational constraint presupposes accepting the ability constraint, and accepting the institutional constraint presupposes accepting both previous constraints. Adding the possibility of rejecting all three constraints, we get four distinct metatheoretical positions that theorists could take vis-à-vis the feasibility constraint on justice.
Theorists disagree about whether to grant feasibility considerations the power to undermine the normative status of a proposed principle or ideal. In particular, theorists disagree as to whether the ideal of justice is conceptually constrained in a way that grants feasibility considerations invalidating power.
The feasibility constraint on the concept of justice roughly states that a necessary (but not sufficient) condition for something to qualify as a conception of justice is that it is possible to achieve and maintain given the conditions of the human world (see Mason 2004; Gheaus 2013). Yet, this leaves open various interpretations of when and why something shall be conceived of as possible or impossible. A common reason why an ideal is not realized is that people, or institutions, fail to comply with its demands. But as is easily recognized, this alone does not show that the ideal cannot be realized. Often, all it shows is that people (or institutions) are behaving unjustly. Still, if a principle will under no conditions be complied with, there seems to be at least something to say for the view that the ideal it envisions is not achievable. The aim of this paper is to examine in which ways foreseeable noncompliance (voluntary or involuntary) with a proposed principle may be held to show that the principle violates the feasibility constraint. Drawing on different understandings of what 'ought implies can' means in political settings, I present three possible interpretations of the feasibility constraint on justice: the ability constraint, the motivational constraint and the institutional constraint. While all three constraints respond to foreseeable noncompliance with proposed principles, the forms of noncompliance vary, as do the reasons for granting them invalidating power. Importantly, I will take no stand for or against the feasibility constraint on the concept of justice. Instead, I will examine different forms that such a constraint could take, if it exists. While ability and motivation commonly figure in competing interpretations of 'ought implies can' (Estlund 2011, 2015; Wiens 2015b), the possibility of creating institutions that assure compliance with justice has, to my knowledge, never been proposed as an independent feasibility constraint. Nonetheless, I believe that what I call the institutional constraint is implicitly present in the reasoning of many authors, in particular those who commit to the view that justice is a specifically 'political' or 'practical' ideal. Through defining and comparing the three forms that the feasibility constraint on justice could take, and identifying the metatheoretical commitments that correspond to each of them, my paper aims to contribute to increased clarity in this tricky conceptual terrain. Before getting to the task, two clarifications are necessary. First, concerns for noncompliance are often taken to be part of the domain of nonideal theory. Yet, my present concern is not with nonideal theory, at least not in the sense of partial compliance theory (Rawls 1999, p. 8). While the noncompliance that gives rise to nonideal theory is noncompliance with principles of justice, the noncompliance of interest here is held to disqualify a proposed principle as a principle of justice, i.e. to show that the proposed principle is invalid. Noncompliance thus puts constraints on first, or ideal, principles of justice.[3] To accept this need not lead us to reject the assumption of full compliance characteristic of ideal theory. It is perfectly possible to combine this assumption, used as a heuristic device for evaluating principles, with a constraint on which principles we may bring up for evaluation (that is, which principles we may assume full compliance with) that is based on foreseeable noncompliance.
To take a famous example, such a constraint is introduced by John Rawls's assumption that a conception of justice should generate its own support, given the laws of moral psychology (Rawls 1999, p. 119; Simmons 2010, p. 29). This demand becomes intelligible only if we take the possibility of noncompliance into account; it disqualifies principles which the parties in the original position would otherwise be tempted to choose, given that they assume full compliance with their chosen principles. Whether or not we hold that the assumption of full compliance is motivated at the most ideal level of theorizing (that is, the kind of theorizing that aims to identify first, or fundamental, principles of justice), we may ask if foreseeable noncompliance puts constraints on what justice may demand at this level.[4] Second, the term 'foreseeable noncompliance' involves an epistemic complication: since the noncompliance of concern here is one that will take place in the future, we can never be absolutely certain that it actually will occur. This raises the question of what counts as foreseeable noncompliance. Is noncompliance foreseeable only when we are (next to) certain that it will occur, or also when we judge its probability sufficiently high, say 75/100? Partly in response to the probabilistic nature of claims regarding what we can or cannot do, some authors have suggested that the binary view of feasibility as a matter of either/or should be replaced, or at least complemented, by a scalar account (see Brennan 2013, pp. 321-322; Gilabert and Lawford-Smith 2012, pp. 815-816; Lawford-Smith 2013b, pp. 243-244). How high we set the standards for something to count as feasible will determine which principles are actually ruled out by the feasibility constraint. Yet, since my present aim is simply to distinguish between different possible interpretations of this constraint, not to determine which principles would fail on each account, I can safely leave this problem aside for now.
The paper is structured as follows. The first section introduces the feasibility constraint on the concept of justice. In the following section, 'Three Feasibility Constraints', I present the ability, motivational and institutional constraints, before clarifying the institutional constraint and the moral costs of assuring compliance in the sections that follow.

Footnote 3: Every normative approach seems to give rise to a distinction between more and less ideal levels of theorizing. However concessive the standard of justice we opt for, it cannot identify whatever happens to be achievable at any given point in time with justice (Estlund 2008, p. 263; Galston 2010, p. 395). If this is true, the question of what we shall do if justice is not achievable is always present. The less ideal the level of theorizing, the more tragic will the choices that we are facing be (that is, the more injustice will our available options include) (Jubb 2012, p. 234).

Footnote 4: Note that there may be different reasons for rejecting the assumption of full compliance at this level. While critics of Rawls belonging to the 'realist' camp think that it produces overly idealistic and therefore useless, or even dangerous, political ideals, critics such as Cohen think that Rawls's constructivist apparatus (to which the assumption of full compliance belongs) confuses fundamental principles with applied principles, and thereby fails to uncover what pure justice requires.
The Feasibility Constraint on the Concept of Justice
The feasibility constraint on the concept of justice is often held to follow from the Kantian formula 'ought implies can', which restricts the set of obligations an agent may have (Jensen 2009, p. 168). If I cannot do X, the thought goes, I cannot have a moral obligation to do X. Accordingly, advocates of the feasibility constraint on justice argue that justice cannot require that which we, individually or collectively, cannot achieve (Gheaus 2013, p. 446; Brennan 2013, p. 322). The claim is not simply that a principle cannot give rise to an 'ought' unless the factual circumstances permit this, that is, a claim about when and how a principle could be applied. Rather, what is at stake is whether justice could demand something that we could never achieve, given the conditions of the human world.[5] Whether or not we accept the feasibility constraint on justice depends largely on what kind of thing we think justice is, and whether or not we think that justice has a practical purpose (or, indeed, any purpose at all) (Cohen 2008, p. 267; Stemplowska and Swift 2012, p. 384). While an epistemic approach perceives of justice as a philosophical ideal that may or may not have practical implications, a practical approach holds that justice is the solution to a political problem and closely tied to the justification of power (Rawls 1980, p. 517; 1999, p. 39; 2001, p. 2; Cohen 2008, pp. 267, 327; Galston 2010, p. 390; Stemplowska and Swift 2012, p. 274). On the former, the domain of justice includes purely evaluative claims (that we shall not necessarily act on), whereas on the latter, all justice claims are prescriptive (i.e. action-guiding)[6] (Farrelly 2007, p. 845; Gilabert 2011, pp. 56-57; Stemplowska 2008; Miller 2013, p. 34). In order for a claim to be prescriptive, it must be prescriptive for someone. On the practical account, then, justice is a quality of the conduct of agents, and there can be no injustice where no agent is violating any obligation of hers[7] (Anderson 2010, p. 5). This explains why the practical approach is committed to the feasibility constraint on justice: if justice is a quality of the conduct of agents, it is also conceptually constrained by 'ought implies can'[8] (but see Mason 2004, p. 257). Still, if every obstacle that may hinder any agent at any point in time from acting on a principle would be allowed to constrain the concept of justice, justice could hardly demand anything at all.

Footnote 5: This also means that the ideally feasible includes not only that which is directly feasible at one specific point in time, but also that which is indirectly possible to achieve through a chain of actions. Since every relevant sense of 'can' includes both first- and second-order abilities, ideal justice may give rise to dynamic duties, including duties to expand the feasible set of actions (see Jensen 2009, p. 176; Gilabert 2009, p. 677).

Footnote 6: If justice includes evaluative claims, we may distinguish between fundamental and applied principles of justice, or between principles of justice and rules of regulation or institutional proposals (Cohen 2008, pp. 253, 279-280; Estlund 2011, p. 215). If, to the contrary, justice claims are always prescriptive, we will hold that principles of justice, by their very nature, are applied and regulative for social institutions (see Schmidtz 2011, p. 779).
Instead of providing action-guiding recommendations for all possible circumstances, then, ideal principles of justice should guide our actions in the most favorable circumstances that we could realistically hope for (henceforth: 'ideal circumstances')[9] (Rawls 1999, p. 176; Mason 2004, p. 252; see also Wiens 2015a, p. 437; Chahboun 2015, p. 235).
Yet, to know what makes us accept or reject the feasibility constraint on justice tells us nothing about what this constraint actually rules out. As stated previously, I will here examine if and when foreseeable noncompliance with a proposed principle of justice shows that the principle violates the feasibility constraint. If justice can only demand that which we, in ideal circumstances, can bring about, foreseeable noncompliance will disqualify a proposed principle of justice if the noncompliance follows from the fact that we cannot comply. But what exactly does 'cannot' mean? As we shall see, there are several possible replies to this question, producing various constraints on the concept of justice.
Three Feasibility Constraints
Whether a proposed principle violates 'ought implies can' or not depends on what, exactly, we mean by 'can'. 'Can' always refers to the capacity of a specific, individual or collective, agent (Lawford-Smith 2013b, p. 247). For simplicity, I will assume that in political settings, the relevant agents are individual citizens or groups of citizens (acting collectively) of a state. What, then, does it mean that an agent of this kind 'can' do something? As David Estlund notes, two competing readings of 'can', a wide and a narrow one, are available (Estlund 2011, p. 210). I will call the feasibility constraint resulting from the wide understanding of 'can' the ability constraint on justice.

Footnote 7: Some may protest that obligations never spring directly from ideals and principles, but always from an application of these principles which also incorporates concerns for empirical constraints. If ideal justice is the state of affairs X, then the action-guiding recommendation corresponding to this ideal is not 'realize X', but 'realize X to the extent that it is possible'. As previously stated, I think that this distinction between applied and fundamental principles is precisely what the practical approach denies.

Footnote 8: I thus disagree with Anca Gheaus's view that rejecting the feasibility constraint could enhance the action-guiding potential of justice 'by providing an aspirational ideal' (Gheaus 2013, p. 448). In order for an ideal to be aspirational, it must be feasible in at least the strictest sense of the term. To reject the feasibility constraint on justice as such (and not only some of the possible senses of 'can' discussed in this paper) is thus to accept that justice may be not only a 'hopeless', but also an impossible ideal (Estlund 2008, pp. 264-267; see also Wiens 2014).

Footnote 9: Cohen (2008, p. 280) rejects the identification of first or fundamental principles with those that apply in ideal circumstances. In his view, applied principles are constituted by a mix of fundamental principles and facts. Hence, applied principles are never fundamental, and fundamental principles do not apply to any particular circumstances. Since my concern here is with views that disagree with Cohen on this point, I will make no assessment of his critique.
The ability constraint: if a proposed principle demands that which we, even under ideal circumstances, cannot do (even if we were to try and not give up), it must be rejected as a principle of justice.
On the wide reading of 'can', the feasible includes all those acts that we, the members of a society, could, individually or collectively, perform given that we all were to try and not give up. Since trying is assumed on this account, the noncompliance involved here may be characterized as involuntary. In the case of justice, two different kinds of inability may be held to produce involuntary noncompliance. The first is the inability to know which acts would bring justice about (even if we could perform these acts, if only we knew which they were). The other is the inability to perform the required act (even if we know that it would bring justice about). Views diverge as to whether or not the first of these inabilities (call it 'epistemic inability') really shows that a principle of justice violates 'ought implies can' (see Howard-Snyder 1997; Carlson 1999). Whether or not we include epistemic inability in our definition will have great implications for which principles are ruled out by the ability constraint. But again, since my present aim is simply to conceptually distinguish this from other possible interpretations of the feasibility constraint on justice, not to determine its scope, I need to take no stand on this issue.
On the second, narrow, reading, 'can' instead refers to those acts that we can bring ourselves to do, that is, to the acts that we can will to do. On the narrow reading, whether or not we can do something depends on our ability to motivate ourselves to do it (Estlund 2011, pp. 210-212). As everybody knows, the motivation to fulfil one's duties varies highly among individuals. Still, what we can will is different from what we simply want; mere weakness of will cannot relieve us from our duties. Rather than tracking the actual moral capacities of individuals, the narrow understanding of 'can' is commonly held to include all things that we as human beings can bring ourselves to do given our most fundamental, common psychological traits, often subsumed under the label 'human nature' (see Nagel 1991, p. 6; Galston 2010, p. 406; Estlund 2011, pp. 210-212; Hall 2013, p. 178; Miller 2013, p. 16). This seems to render what we can do a matter of statistical probability; we ground our assumptions of which acts are feasible on the frequency of their occurrence in the historical record of humanity. In this sense, ought, on the narrow reading of can, implies 'reasonably likely' (Estlund 2008, p. 265; Lawford-Smith 2013b, pp. 253-254; Wiens 2015b). Very few acts, it would seem, are ruled out because no human being can bring herself to perform them. Some people voluntarily live in chastity and poverty, risk their lives trying to rescue others, donate one of their kidneys to a stranger or give away their children to common custody. And even if we may call such persons exceptional in various ways, we usually have no reason to call their humanity into question (see Estlund 2011, pp. 221-222). Still, most people would be unable to bring themselves to perform such acts, and this has led defenders of the narrow reading to conclude that a demand to do so would violate 'ought implies can'. I will call the feasibility constraint resulting from the narrow understanding of 'can' the motivational constraint on justice.
The motivational constraint: if a proposed principle demands that which human beings generally, even under ideal circumstances, are unable to bring themselves to do, it must be rejected as a principle of justice.
On the narrow reading of 'can', the feasible includes those acts that we, the members of society, could bring ourselves to perform, individually and collectively, given the common psychological traits constitutive of our human nature. Critics of this view have argued that it produces an overly concessive understanding of justice and that the intellectual process leading us to affirm it constitutes a form of 'adaptive preference formation' (Cohen 1995, p. 253). Why, we may ask, should the mere fact that people, due to some natural inclination, cannot motivate themselves to do something lead us to conclude that this act is not a requirement of justice? Why not rather conclude that people are naturally inclined towards injustice? (Estlund 2011, p. 224; Mason 2004, p. 260). One possible reply states that if justice is action-guiding for political institutions, and the purpose of such institutions is to mitigate the effects of our limited capacity for altruistic behavior, then principles of justice must concede to this aspect of human nature rather than condemn it. If a proposed principle ignores human nature, it will never provide the guidance needed to inform political action, since politics is a response to problems that arise precisely due to human nature (see Gilabert and Lawford-Smith 2012, p. 813; Miller 2013, pp. 25-26). Another possible defence of the narrow reading rejects the distinction between 'can do' and 'can will to do', and argues that 'can do', on all plausible readings, actually entails 'can will to do' (see Wiens 2015b). To determine the real status of the narrow reading of 'can' falls outside the scope of this paper. For my present purposes, it suffices to convince the reader that the motivational constraint constitutes a serious proposal worthy of consideration. I hope hereby to have done so.
Apart from the wide and narrow readings, we may distinguish between self- and other-oriented understandings of 'can'. If an account of justice prescribes that a person performs certain acts without showing any interest in her reasons for so doing (reasons that would be crucial to conceptions of justice that also place demands on persons' intentions or virtues), what is feasible is not only a matter of what this person can do or will to do, but also of what she can be brought to do by others (Lawford-Smith 2013b, p. 251). A possible justification of this understanding of 'can' may be borrowed from the growing field of realist political theory. Realists criticize idealists for mistaking politics for morality, and urge us to recognize politics as a domain characterized by conflict and power struggle (Geuss 2008, p. 25; Horton 2010, p. 434; Rossi 2010, p. 504; Philp 2012, p. 633). This seems to entail recognizing that which political ideals we are able to realize is a matter not only of what we (the members of society) can do or will to do, but also of what we (the promoters of a particular political ideal) can make others (our antagonists) do, despite their initial or persistent denial of these acts' desirability (see Galston 2010, p. 395; Horton 2010, p. 443; Sleat 2013, pp. 53-55). In what follows, I will assume that in political settings our means for making others do things are mainly institutional. If we think that politics is a domain characterized by conflicting interests and profound disagreement resulting in power struggle, then a solution to the political problem must include institutional arrangements that have the power to make those comply who do not themselves affirm the principles or ideals that the institutions embody. On the view that I am proposing, 'can' in political settings thus refers to those acts that we can make others do through the creation of controlling and coercive institutions. If, even in ideal circumstances, no institutional arrangement that we ('the vanguard of justice') could bring about would make dissenters comply with a proposed principle, the principle will be rejected. For simplicity, I will here assume that all institutional arrangements that could be put in place by human beings (in ideal circumstances) are available also to the vanguard of justice. This assumption does not sidestep the problem at hand, since, once in place, the institutions would still be required to possess the controlling and coercing powers necessary to make potential noncompliers comply. I will call the feasibility constraint resulting from the other-regarding understanding of 'can' the institutional constraint on justice (see Mason 2004, p. 256; Barry and Valentini 2009, p. 508).
The institutional constraint: if a proposed principle demands that which we, even under ideal circumstances, are unable to make others do (through the creation of coercive institutions), it must be rejected as a principle of justice.
Note that our inability to make others do something need not correspond to any inability on their part. There may be things that human beings could easily do or bring themselves to do which we will never be able to make them do. Noncompliance due to mere weakness of will may, on this view, render an ideal infeasible, and thus the principle that recommends it invalid, if no institutional arrangement could make the noncompliers comply. Of the three constraints I have presented, the institutional constraint is probably the least familiar and most controversial one. In order to avoid misunderstandings and respond to some immediate objections it is likely to provoke, the following section explains this idea in further detail.
The Institutional Constraint, Some Clarifications
In order to avoid confusion, the institutional constraint should be distinguished from three related, but non-identical, views. First, the institutional constraint is distinct from institutionalism, i.e. the view that principles of justice apply exclusively to social institutions and not to individual choices within these institutions (apart from the choice of complying or not with the rules set by the institutions) (Rawls 1999, pp. 7, 99; Cohen 1997, p. 3; Nagel 1991, p. 18). While the institutional constraint seems to presuppose that we accept institutionalism (why else think that the feasibility of institutional arrangements puts constraints on justice?), the reverse is not true: even on an institutionalist account, the rules set by just institutions may be such that they demand more than citizens will ever comply with, or even more than they can or can will to comply with. While accepting institutionalism seems like a necessary condition for accepting the institutional constraint, it thus provides less than sufficient reason for doing so. Second, the institutional constraint is distinct from what may be called the associative account of justice, i.e. the view that justice applies only to the specific relations that arise among people who live under shared coercive institutions. On this view, institutional arrangements are not merely means for realizing justice; they are the very origin of justice relations between individuals (Nagel 2005, pp. 120-121). Later in this section, I will argue that a common defence of the associative view also offers a reason to accept the institutional constraint. Still, the two are different, since the associative view privileges actually existing institutions by assuming that relations of justice appear exclusively within their realm.[13] The institutional constraint implies nothing of the kind. It considers only whether or not a required institutional arrangement could be put in place in ideal circumstances, not if it currently exists. Third, the institutional constraint is distinct from practice-dependence, according to which the content, scope and justification of political ideals are determined by existing practices and institutions (Sangiovanni 2008, p. 138). Practice-dependence, or at least one version of it, incorporates both institutionalism and what I have called the associative view of justice (see Sangiovanni 2008, pp. 138, 140). But it adds to these the view that ideals are identified and justified through an interpretative process which (1) carefully describes the point and purpose of an existing institutional system, and (2) from that description presents an argument of what the institutional system should be like (Sangiovanni 2008, pp. 142-143). That ideals are extracted from existing institutions does not entail that those institutions are able to fully realize their point and purpose (if so, there would be no need for the second step of interpretation), nor does it seem to entail that any feasible institutional arrangement could do so. Part of the point and purpose of the institution of the last anointing is to absolve the dying person of her sins and prepare her for the passage over to the afterlife. Part of the point and purpose of the UN Security Council is to secure world peace. Yet, it is doubtful whether any institutional arrangement could successfully provide any of these outcomes.

Footnote 13: For the view that existing institutions may constitute a 'soft' feasibility constraint, see Lawford-Smith (2013b, p. 255).
Hence, to affirm practice-dependence does not necessarily entail that we affirm the institutional constraint. Further, I have claimed that the institutional constraint is distinct from the motivational constraint. But is it really? What if the only things we cannot, in ideal circumstances, make others do are those that violate human nature? Perhaps one could even argue that the only way for us to see that a proposed principle violates human nature is through the realization that no institutional arrangement could ensure compliance with it. Take the previous example of giving away your children to communal upbringing as a case in point. I agree that this objection has a certain appeal. Still, the definition it proposes strikes me as overly strict. Surely, there are things that we could bring ourselves to do that no institutional arrangement could make us do. To exemplify, take the act consequentialist proposal that justice demands that we grant equal consideration to the interests of all people, including ourselves, when making decisions in our everyday life. Setting aside the epistemic problem of how to know how different decisions affect the interests of others, I think most would agree that this principle demands more than most people could bring themselves to do.[14] It violates, in other words, the motivational constraint on justice. Now, take the slightly different proposal that we must grant not equal, but still some, consideration to the interests of others when making everyday decisions (Scheffler 1994, p. 20). To be sure, the demandingness of this proposal depends on exactly how much consideration we are to grant the interests of others, but it is easy to think of versions of this ideal that do not conflict with human nature. Still, it is hard to see what kind of institutional arrangement could make people comply with it. Even assuming that we could easily tell whether or not a specific decision conforms with the principle, the mere vastness of decisions (not to mention nondecisions) facing persons in their everyday life would render it impossible to keep a record of the decisions made by every member of a society. And if we cannot tell compliers from noncompliers, no institutional arrangement is likely to be able to render it sufficiently sure that enough people will comply. One may object that even in the absence of effective control mechanisms there may be institutional arrangements that could make people comply, not by forcing them to do so, but by convincing them of the moral rightness of the principle or ideal in question. To inform persons' moral views is undoubtedly a crucial task not only of educative, but also of legal and retributive institutions. Yet, recall that what I have called the institutional constraint assumes permanent dissensus on moral and political ideals. Institutions that could make others comply are necessary precisely because we acknowledge that these others will deny the moral status of our proposed principle. For this reason, the controlling and coercive aspects of institutions are crucial for the institutional constraint. Does the assumption of permanent dissensus contradict my previous assertion that the constraints on justice here considered apply also in ideal circumstances? No. Recall that we defined ideal circumstances as the best circumstances that we could realistically hope for. On the realist view, we could never realistically hope for something like a 'rational' or 'overlapping' consensus on what justice requires, and this is why the lack of feasible institutional arrangements that could make dissenters comply will invalidate a proposed principle of justice.

Footnote 14: Note that this critique is different from the common objection that act consequentialism undermines agents' integrity, though this too often falls back on the notion of 'human nature' (see Scheffler 1994, pp. 3, 7-8).
Last, the differentiation between an 'us' who are already motivated to build just institutions and a 'them' who should be incentivized to comply with them seems troubling from a normative point of view. It cries out for an investigation of who the bearer of political duties is. Even if we accept that we (the vanguard of justice) cannot have an obligation to make others comply with the demands of justice if we do not have the ability to make them do so, it is far from obvious that these others have no obligation to comply (cf. Lawford-Smith 2013a, pp. 659-660). Of course, we could take it as a given that they simply will not fulfil this obligation and ask what justice demands of us to do in this case. But this would distance us from the assumption of ideal circumstances, and thus render our inquiry one about nonideal theory alone. To assume that our present obligations are the result of a previously unfulfilled obligation presupposes a more ideal level of theorizing where noncompliance with this obligation is not taken for granted. If it is true that we must take noncompliance into consideration even at the most ideal level of theorizing, noncompliers can have no duty to comply even though we (the vanguard of justice) have a duty to make them do so. But why, one may ask, accept the seemingly aristocratic idea that political obligations fall only on the vanguard of justice? 15 Surely, such obligations must be shared by all (or most), and not only some, members of a political community? 16 I certainly do not wish to suggest that only those who value (a particular conception of) justice have a duty to comply with its demands. Still, there might be a way to arrive at the institutional constraint on justice. This passes through the observation that principles of justice give rise to collective rather than individual obligations, and that for collective obligations to distribute to individuals, some likelihood that other members of the collective will comply is required. Since collective acts are made up by several individual acts, there will always be a minimum of individuals required for a collective act to take place. If you and I have a collective duty to move a dining table, but you will not fulfil your part of the duty, I cannot fulfil mine (given that I cannot lift the table on my own). Even if I try, what I end up doing is not 'helping to move the table' but futilely pulling one end of it. Only if other members of the collective fulfil their shares of a duty, or, on an internalist view, if we have reason to believe that they will do so, are we required to do ours (Lawford-Smith 2012, pp. 456-457). This insight may lead to the conclusion that collective duties only distribute to individuals when there is some sort of assurance that others will contribute their 15 If society is organized in a way that concentrates political power in the hands of a few, these arguably have a greater responsibility than the powerless majority for directing society towards the ideal. Still, this does not mean that if the minority tries to bring justice about, the majority has no duty to help. 16 Yet, see Philp (2010) who seems to hold that political theory should provide action-guiding recommendations specifically for politicians. share. And one way to arrive at this kind of assurance is through the creation of state institutions which provide incentives to comply. This brings us back to what I have called the associative account of justice. 
On this view, justice requires coordinated collective action that can only be sufficiently assured through legal institutions backed up by a monopoly of force. Thomas Nagel offers two separate justifications for this view: first, if there is no assurance that others will comply with the demands of justice, individuals will lack a reason to comply, since only when all comply will the collective interest coincide with the individuals' self-interest. This justification falls back on the Hobbesian assumption that it must be possible to motivate duties through appeal to self-interest. But even if we reject the Hobbesian view, Nagel maintains that our obligations are dependent on some sort of assurance that our acts are part of a reliable and effective system. The idea is not simply that, absent such assurance, individuals will lack the motive to do what justice requires, but also, and importantly, that they will lack the opportunity to do so (Nagel 2005, pp. 115-116). This is the problem of collective duties: if not enough people contribute, we cannot do what would ideally be required of us (helping to move the table rather than pulling at its end). If we think that principles of justice, in ideal circumstances, must translate perfectly (that is, without the loss indicating a move from ideal to nonideal theory) into obligations of individual agents, we must therefore reject principles which demand that which we cannot make others do through the creation of coercive institutions (cf. Mason 2004, pp. 256-257). Though far from conclusive, I think this offers a fairly convincing provisional defence of the institutional constraint, convincing enough for us to consider this position seriously. But to accept the institutional constraint on justice brings us to a further problem. What if the only institutions that can assure the compliance necessary for collective obligations to distribute to individuals are themselves undesirable? This undoubtedly sounds like a matter of desirability rather than feasibility. Yet, as I will show, if it is not a feasibility problem, it is hard to see why we should think it is a problem at all.
The Moral Costs of Assuring Compliance
According to the institutional constraint on justice, the acceptability of an ideal depends on what we can make others do through the creation of controlling and coercive institutions. Now, what we can make others do is often a matter of how powerful our means for convincing them to do it are. Suppose that a proposed principle of justice requires that people contribute to society according to their ability. Suppose further that the only way for us to make dissenters comply with this principle is to torture them. Given that torture could be institutionalized, the proposed principle does not violate the institutional constraint. Yet, to torture people seems like a morally outrageous thing to do. And this brings us to the problem of the moral costs of assuring compliance with justice. The idea is this: if it would be absurd to suggest that we have a moral obligation to torture people, it would be equally absurd to suggest that we have an obligation to make people contribute according to ability if torture is the only way for us to achieve this outcome. If it is possible to enforce a principle only through deeply undesirable means, such as extensive surveillance, coercive indoctrination, shameful revelations or threats of brutal punishments, it would be absurd to affirm this principle as a principle of justice (see Wolff 1998, p. 114; Rawls 1999, p. 452; Mason 2004, p. 256; Schmidtz 2011, p. 788; Hall 2013, pp. 177-178; Jubb 2015, p. 929). The problem arises because justice is often perceived both as a quality of political institutions and as a quality of the effects of these institutions on society. Let us say that a tax system is just if it brings about a certain distributive pattern. Yet, apart from producing this pattern, a just tax system must also not violate people's integrity by, for instance, letting taxmen break into people's homes in order to disclose attempts at tax evasion. This seems to suggest that apart from the three aforementioned constraints, there is a fourth, moral cost-constraint, on justice. I will soon proceed to show why this conclusion is mistaken. First, however, I want to grant that the proposed objection is not, at least not necessarily, a sign of mere conceptual confusion. Some would argue that the moral cost-objection confuses the desirability of a principle with its enforceability. What people ought to do and what others (the state) have a right to make people do, they would say, are two entirely different things. The mere fact that I have a duty to do something does not sanction your forcing me to do it. In some cases at least, I have a right not to be forced to do my moral duty (Otsuka 2008, p. 449). Yet, the status of this right seems less clear when it comes to political duties. Many theorists agree with Kant that duties of justice are by definition those that may be institutionalized and enforced (see e.g. Tan 2004, p. 361). For the sake of argument, I will here accept the Kantian view. By doing so, I grant that the moral costs of ensuring compliance with a principle could, on some accounts, affect its desirability. But this does not warrant a separate moral cost-constraint, since, as I intend to show, the problem it is supposed to address is already covered by the institutional feasibility constraint. Juha Räikkä has suggested that if an ideal can be realized only at high moral costs, this may give rise to a special kind of feasibility constraint.
'When the necessary moral costs of changeover are taken into account in the evaluation of feasibility', he argues, 'it becomes partly a normative matter to decide which institutional arrangements are feasible and which are not' (Räikkä 1998, p. 37). Räikkä's argument is specifically about transition costs. But since matters of transition are part of nonideal rather than ideal theory, the more urgent problem for us is what would happen if assuring compliance with justice continued to be morally costly once the initial transition from unjust to just institutions was completed. I see no problem with extending Räikkä's proposal to this slightly different situation. Still, I think we have good reasons to resist the idea of moral feasibility constraints of the kind Räikkä points to. As argued by Gilabert and Lawford-Smith, however strong we hold the moral reasons against something to be, they do not have the metaphysical status to ensure that a proposed ideal cannot be realized (Gilabert and Lawford-Smith 2012, p. 817; see also Brennan 2013, p. 328).
Recall that the claim of concern here is that justice is restricted not only by what we can bring about, but also by what we can bring about through reasonable means. Rather than feasibility, this looks like a matter of desirability. Must the means needed to realize justice (rather than justice itself) be desirable? In other words, are the costs of assuring compliance with just institutions (under ideal circumstances) internal or external to justice itself? (Hall 2013, p. 176). How we respond to this question seems largely dependent on whether we conceive of justice as 'all-things-considered' or not.
Justice as all-things-considered is most easily understood in contrast to what is commonly termed value pluralism. For defenders of the pluralist view, justice is not the 'first virtue of social institutions', but only one among several values which are not only distinct but may also conflict (Cohen 2008, p. 286; cf. Rawls 1999, p. 3). While all are valuable, none is worth pursuing at all costs. We commonly think about values, even political values, in this way. Take, for instance, freedom and security. While both are valuable, most agree that they must be balanced against each other to reach a morally optimal outcome. To accept this is not to say that the optimal level of freedom (when balanced against security) constitutes 'real' or 'political' freedom. It is simply to say that this is the right amount of freedom given the necessary trade-off. Yet, Rawls has taught us to think of justice differently, as an ideal which gives rise to individual claim rights and which must therefore not be compromised but for the sake of justice itself. Rather than trading justice off against other values, Rawls calls the outcome of the trade-off among different values 'justice' (Cohen 2008, pp. 303-305).
The pluralist view can easily deal with the moral costs of bringing justice about. Since justice is just one value among many, it will need to be traded off against others in order to reach a morally optimal outcome. How much justice we should bring about simply depends on how costly the trade-off against other values is (Cohen 2008, p. 327). For pluralists, then, the 'problem' of moral costs is actually not a problem at all. In addition, if we do not know that full justice is desirable (given the necessary costs), we seem to have no reason to assume that there must be feasible institutional arrangements that can assure this outcome. Advocates of the pluralist view should therefore reject not only the proposed moral cost-constraint but also the institutional constraint on justice.
It is only when perceived in the 'all-things-considered' sense that we have reason to assume that justice, under ideal circumstances, must be possible to pursue without significant moral costs (see Gilabert 2011, p. 57). Yet, a closer look at this reveals something surprising: if justice is all-things-considered, it seems as if the problem of moral costs either dissolves or is captured by the institutional constraint. If justice denotes the optimal trade-off between some values, say freedom and equality, any infringement on one of these values becomes a conflict internal to justice itself. If the realization of one aspect of justice (say, a certain distribution of primary goods) would require a violation of another aspect (say, everyone's equal freedom), the proposed conception of justice is infeasible. One may think that this shows that the moral costs of assuring compliance with justice, contrary to what I have argued, actually constrain the feasibility of an ideal. But this would be a mistake, since what it really shows is that full justice is not merely too costly, but impossible to bring about. The moral costs of assuring compliance are no longer costs of assuring compliance with justice, but only with one aspect of justice. If assuring compliance with one aspect of an ideal requires violation of another of its aspects (in ideal circumstances), then the ideal is clearly infeasible.
Since, in this imagined scenario, no institutional arrangement can realize full justice, the proposed moral cost-constraint thus turns out to be an institutional constraint.
In order for the problem of moral costs to arise, we must assume that assuring compliance requires a violation of some value that is not included in the relevant trade-off constitutive of justice. But whatever value that would be, its violation is a bullet which defenders of the all-things-considered account should happily bite. If justice (in ideal circumstances) trumps other values, which should be presumed on an all-things-considered account (why bother to make the trade-off constitutive of justice if only to then trade this off against other values?), this value, whatever it is, will simply be overridden. If the problem of moral costs is not a feasibility problem, it seems as if it is no problem at all.[17]

[17] Geoffrey Brennan suggests that what we shall do is always a matter of optimization. On the optimizing view, he argues, 'claims about costs are claims that invoke both feasibility and desirability considerations in equal measure' (Brennan 2013, p. 328). As shown by my argument above, this conclusion strikes me as mistaken. If what we shall do is a matter of optimization of different values, the problem of costs will affect neither the feasibility nor the desirability of the values involved. The fact that we should not realize X because doing so means that we would have to forego Y neither shows that X is infeasible, nor that it is undesirable. Rather, what it shows is that not-Y is undesirable, and more so than not-X. If, on the other hand, a proposed trade-off between X and Y would be too costly in terms of either X or Y (or Z), this only shows that the trade-off is suboptimal.

Feasibility, not Fact-Sensitivity

I have argued that what one could mistakenly think of as a moral cost-constraint is actually an instance of the institutional constraint which occurs if we hold that principles of justice provide all-things-considered recommendations for political institutions. This section anticipates and responds to a possible objection to this argument. The objection states that the problem of moral costs is not limited to the all-things-considered account of justice. Even on the pluralist account, we should reject ideals that are too costly to realize, since principles of justice are justified only when the effects of applying them to the world coincide with our considered convictions about justice. Contrary to what I have argued, the problem of moral costs is not a matter of feasibility, but instead concerns the fact-sensitivity of normative principles and ideals. While the former may be treated as a secondary matter external to justice itself, the latter determines the normative status of principles and must therefore be incorporated in every serious attempt to assess a proposed principle of justice (see Farrelly 2007, p. 844; Hall 2013, p. 177).

The distinction between fact-sensitive and fact-insensitive metaethical approaches concerns our reasons for affirming principles. According to the fact-sensitive approach, principles of justice are justified only in light of the facts of the world (Rawls 1999, p. 137; Cohen 2008, p. 259). In order to know if we shall affirm a proposed principle, then, we must consider its effects when applied to the world as we know it (see Jubb 2009, p. 345; Hall 2013, p. 176). Let us, for the sake of argument, assume that this is true. Even so, I think that the objection fails, since to consider the effects of a principle here means to consider the effects of complying with the principle. This is clearly shown by Rawls's defence of the difference principle, in response to which Cohen formulates his fact-insensitivity thesis. The difference principle states that inequalities are just if (and only if) they benefit the worst-off. Applying this principle to the world as we know it will, Rawls asserts, produce a fairly egalitarian distribution of primary goods. The distribution of natural endowments and the workings of open markets will, as a matter of fact, prevent a concentration of resources in the hands of the few (Rawls 1999, p. 137). Though we can easily see how the difference principle could come to justify great inequalities if applied to a world where markets worked differently, or where natural assets were distributed in a different way, such counterfactual considerations have, according to defenders of the fact-sensitive approach, no implications for its status as a fundamental principle of justice[18] (Rawls 1999, p. 137; Valentini 2012, p. 658; Miller 2013, pp. 23, 26; cf. Cohen 2008, p. 259). That the fact-sensitive approach asks us to consider the effects of complying rather than not complying with a principle is no coincidence. Since to consider the effects of not, or only partially, complying with a principle is always also to consider the effects of complying with a different principle, the assumption of full compliance is necessary in order for us to know which principle we are assessing (see Simmons 2010, pp. 8-9). The moral cost-constraint states that if the costs of assuring compliance with a principle are too high, the principle must be rejected as a principle of justice. But since the fact-sensitive approach assumes full compliance with the principle subject to evaluation, it fails to recognize any such costs. Hence, fact-sensitivity provides no reason to affirm the proposed moral cost-constraint.[19]
To further clarify the distinction between feasibility and fact-sensitivity, think of two possible objections to the socialist principle 'from each according to his ability, to each according to his need'. The first objection states that no feasible institutional arrangement can produce this distribution. The reason why this is so is largely unimportant, but let us assume that there are no secure ways of measuring abilities and needs, and that people will constantly underrate their abilities and exaggerate their needs. The ideal the socialist principle prescribes would then be unachievable through the means available to institutional arrangements, and this would lead advocates of the institutional constraint to reject it as a principle of justice. The second objection instead concerns the ability of the socialist principle to capture our considered convictions about justice. Suppose that the socialist principle matches our considered convictions about justice when applied to the world as we know it.
Suppose further that if, in a different world, a few extremely needy persons' needs would enslave the rest of the population, making them work at the top of their capacity for only minimal compensation, the socialist principle would strike us as unjust. In this case, advocates of the fact-insensitive approach would reject the socialist principle as a fundamental principle of justice, while defenders of the fact-sensitive approach could still affirm it as such. Note further that neither of these views would be committed to rejecting the socialist principle as an applied (non-fundamental) principle of justice, given the prevailing factual circumstances. Both could agree that socialist justice is a normative ideal for us here and now, though defenders of the fact-insensitivity thesis would insist that it could not tell us the whole truth about justice. If we instead assume that the needy few would swallow all resources in this world, and, as a result, the socialist principle would not capture our considered convictions about justice here and now, the fact-insensitive approach would still not allow us to affirm socialist justice only because there could be another world where the outcome would provide the relevant match. Fact-insensitivity about grounding is not about denying the normative importance of the facts of the world; it is simply to say that there is something beyond the applied principles that we affirm in light of these facts (Cohen 2008, pp. 279-280).
The disagreement between advocates of fact-sensitive and fact-insensitive approaches to justice, then, has nothing to do with the costs of assuring compliance. This, I have argued, is a feasibility constraint which appears if we assume that justice should provide all-things-considered recommendations for political institutions (cf. Jubb 2012, p. 238). Though justice as all-things-considered is likely to attract more defenders of fact-sensitive than of fact-insensitive approaches, I see no reason to think that advocates of the fact-sensitive approach must be committed to this view.
Four Metatheoretical Positions
I have argued that foreseeable noncompliance gives rise to three possible feasibility constraints on the concept of justice: the ability constraint, the motivational constraint and the institutional constraint. While the three constraints have so far been considered in isolation, it is now time to take a closer look at the relation between them. I will argue that the three constraints constitute a sequence, in the sense that accepting the motivational constraint presupposes that we accept the ability constraint, and accepting the institutional constraint presupposes that we accept both previous constraints. If we add the possibility of rejecting the feasibility constraint on justice altogether, we get four metatheoretical positions that theorists could take vis-à-vis the feasibility constraint on justice. Depending on which position we adopt, we will be inclined to accept or reject proposed principles according to their demandingness. To illustrate this, the figure below borrows Estlund's terms 'impossible', 'hopeless' and 'hopeful' theory, which depict a falling scale of demandingness (Estlund 2008, Chap. XIV). To these, I have added my own category, 'enforceable' theory, signalling the moral significance of coercive institutions capable of assuring compliance with justice, as emphasized by the institutional constraint. Whether or not a certain constraint will actually produce the corresponding kind of theory is ultimately an empirical matter. To affirm one of the four metatheoretical positions should thus be understood as granting the acceptability of the corresponding kind of theory, rather than stating that principles of justice are or should be of the specified kind.

First, we could reject the feasibility constraint on justice altogether. In this case, we allow for impossible theory, that is, theory which promotes principles that we have no prospects of complying with. Of course, nothing says that rejecting the feasibility constraint will actually lead us to affirm this kind of theory. This depends on what the correct principles of justice, on the one hand, and the world, on the other, look like. All it says is that whether or not we are ever able to realize a proposed ideal or meet a proposed standard of justice is irrelevant for assessing the status of this ideal or standard.
Second, we could accept only the ability constraint on justice. To recapitulate, the ability constraint states that justice cannot demand that we perform acts which we could not perform even if we were to try and not give up. This is the narrowest of the three interpretations of the feasibility constraint on justice. To accept the ability constraint but reject the two other constraints signals that we are prepared to accept hopeless theory, that is, theory which falls short of demanding the impossible, but whose standards we still have good reason to believe we will never meet (Estlund 2008, p. 264). Again, whether or not affirming the ability constraint will actually lead us to adopt this kind of standard is an empirical matter. But the mere fact that a proposed standard is hopeless will, on this account, not lead us to conclude that the principle behind it must be rejected.
Third, we could opt for the understanding of the feasibility constraint on justice that I have called the motivational constraint. This states that justice cannot demand that we perform acts which we, due to our human nature, cannot bring ourselves to perform. Now, if I am not able to do something, I will also not be able to bring myself to do it (at the very best, I could bring myself to try). To accept the motivational constraint, then, seems to presuppose that we also accept the ability constraint. To affirm these two constraints signals that we are not prepared to settle for anything less than hopeful theory, that is, theory whose standards it is likely that we will be able to meet (Estlund 2008, p. 267). Still, whether or not this will actually result in a more concessive standard of justice than the previous constraints is ultimately an empirical question. At the end of the day, humans may turn out to be able to bring themselves to do all the things that they could do if they were to try and not give up. If so, the ability and motivational constraints will accept and reject exactly the same standards of justice, albeit for different reasons.
Last, we may affirm what I have called the institutional constraint on justice. If we do, we are concerned not only with the possibility and probability of meeting a certain standard of justice, but also with the standard's enforceability through the creation of coercive institutions. To affirm the institutional constraint seems to presuppose that we accept the ability and motivational constraints: our ability to make others do things seems to depend on their being able both to do and to bring themselves to do them.[20] But, one may object, there may also be things that human beings can only do or bring themselves to do if incentivizing institutions are in place. Institutional arrangements may affect people's courage, creativity and endurance in ways that go beyond the individuals' control. Does not the fact that coercive institutions could enhance people's physical, mental or motivational capacities show that the institutional constraint may also produce a more demanding standard of justice than the ability and motivational constraints? No, it does not, at least not on the reading I propose here. On this reading, to reject the institutional constraint is not to say that justice must be achievable without coercive institutions (nobody, I think, would defend such a view), but only that justice must be achievable through the means provided by institutional arrangements or otherwise. What the institutional constraint adds is the assumption that the lack of coercive institutions that could ensure (a sufficient level of) compliance with a proposed principle is itself a reason to reject it as a principle of justice; hence enforceable theory. On this account, justice cannot demand that we be as virtuous as we could (bring ourselves to) be unless there are institutional arrangements that could make it sufficiently certain that others will make a similar effort. If we can make people do all the things that they can do or will themselves to do, the institutional constraint will produce the same standard of justice as the previous constraints. Yet, if there are things that we cannot make others do simply because they do not want to do them, the institutional constraint will produce a more concessive standard of justice than these.
Conclusions

I have argued that three different understandings of what 'ought implies can' means in political settings give rise to three possible feasibility constraints on the concept of justice: the ability constraint, the motivational constraint and the institutional constraint. Each of these places separate demands on proposed principles of justice. Affirming or rejecting the various constraints reveals our commitment to one out of four metatheoretical positions. Ranked according to demandingness (from highest to lowest), these allow for a conception of justice to promote impossible, hopeless, hopeful or only enforceable normative standards. Which constraints we affirm is likely to depend on how 'practical' we hold the ideal of justice to be. If we think that justice must provide action-guiding recommendations for political agents in the real world, we tend to affirm more constraints than if we think that justice is an abstract philosophical ideal which may or may not have practical implications. Which, if any, of the constraints should we then affirm? This is a crucial question, and one that I have made no attempt to answer here. To do so would require an argument about the nature of normative concepts in general, and of justice in particular, problems too complex to be addressed within the scope of this paper. Still, I hope that the present discussion will facilitate future studies of this and related topics. My proposed distinction between the three feasibility constraints contributes to this purpose in at least three ways. First, it provides conceptual clarity and prevents misunderstandings. Second, it allows a shift in focus from the debate for or against the feasibility constraint on justice towards the pressing issue of what such a constraint would actually rule out. While the ability constraint only rules out principles that demand the strictly impossible, the motivational constraint also rules out demands which it is possible, yet highly improbable, that we fulfil. These two constraints invite us to think hard about the limits of human capacities: physical, mental and motivational. By contrast, the institutional constraint directs our thoughts towards the process of shaping and enforcing certain desired behaviours. As should be clear from the previous discussion, these matters are closely intertwined: which behaviours we could ensure through the creation of institutions depends on human capacities, and whether or not we will be able to fully exercise those capacities may depend on which institutional arrangements are available to us. Yet, once we see that the three constraints place different demands on principles of justice, we get a clearer picture of our reasons for rejecting a proposed principle on feasibility grounds. Third, the distinction may prove useful also in a broader context. Even if we reject all three feasibility constraints as constraints on the concept of justice, we may find them helpful for thinking about what justice may require at the level of (shorter- or longer-term) application. Though my concern here has been exclusively with constraints on first, or ideal, principles of justice, nothing prevents us from employing different constraints at different levels of analysis once we shift our focus from the ideal to the nonideal.
"Philosophy"
] |
Exclusive rare Higgs decays into lepton pair and light mesons
Exclusive rare Higgs decays into a lepton pair plus one light hadron, such as $h\to \rho^0(\omega)\ell\bar{\ell}$, $h\to \pi^0\ell\bar{\ell}$, $h\to \pi^+ (K^+)\ell^-\bar{\nu}_\ell$, and $h\to \rho^+(K^{*+})\ell^-\bar{\nu}_\ell$, have been explored in the standard model. The decay amplitudes are dominated by the Higgs couplings to gauge bosons and to charged leptons, and the branching ratios are predicted in the range of $10^{-8}\sim 10^{-5}$. We have also analyzed the differential dilepton invariant mass and angular distributions of $h\to \rho^0 \ell^+\ell^-$ decays. It will be challenging to search for these rare processes. Nevertheless, experimental studies of them, in particular $h\to \rho^0\ell^+\ell^-$ with $\ell=e,\mu$, might be interesting both to help deepen our understanding of the standard model and to probe new physics beyond the standard model in future high-precision experiments.
Introduction
The discovery of the 125 GeV Higgs boson by the ATLAS and CMS collaborations [1] at the CERN Large Hadron Collider (LHC) in 2012 was a major breakthrough in particle physics, which completed the standard model (SM) and opened up a new era of precise determination of the properties of this new particle. So far, experimental studies of the Higgs boson couplings to the SM fields [2,3] show no significant deviations from the SM predictions. Nevertheless, it is conceivable that much more detailed investigations, both theoretical and experimental, may help to reveal non-standard properties of the particle, which would be very useful for increasing our understanding of Higgs dynamics.
Decays of the Higgs boson into gauge bosons, including h → γγ, h → ZZ*, and h → WW*, played important roles in the discovery of the particle. In addition to improving the measurements of these modes, exclusive rare Higgs decays would also be very interesting at future high-energy experimental facilities, such as the high-luminosity LHC and high-energy LHC, a Higgs factory, or even a 100 TeV proton-proton collider, at which one could have a large sample of Higgs particles. Actually, some types of these decays have been studied theoretically and experimentally, like h → Vγ decays [4,5,6,7,8] with V denoting vector mesons ρ, φ, J/ψ, etc., and h → VZ decays [9,10,11,12,13,14], as well as processes with leptonic final states: h → γℓ⁺ℓ⁻ decays [15,16,17,18,19,20,21,22,23,24,25,26], h → Vℓℓ̄ decays (V = Υ, J/ψ, φ) [27,28], and h → η_{c,b}ℓ⁺ℓ⁻ decays [29]. In the SM these transitions have small branching fractions, so their experimental study is generally a difficult task. Searches for these rare processes, however, may potentially probe novel Higgs couplings in the case that their decay rates are enhanced in some new scenarios beyond the SM.
In the present paper we will focus on rare Higgs decays into a lepton pair plus one light hadron, namely ρ, ω, and π mesons. In the SM, these processes, such as h → Vℓℓ̄ (V = ρ⁰, ω) decays, receive contributions from the diagrams shown in Figure 1, in which the Higgs boson couples to fermions and to the gauge bosons Z and γ. The Higgs couplings to the SM fermions are strongly suppressed for the electron and the light quarks (u and d), since they are proportional to m_f/v (v = 246 GeV is the vacuum expectation value of the Higgs field). For this reason we do not consider contributions generated from the Higgs coupling to u, d quarks, and the second diagram of Figure 1 can be neglected for the electron mode.
Higgs decays into a heavy meson plus a lepton pair have been analyzed in Refs. [27,28,29], and in the SM their branching ratios have been given as 10⁻⁶ ∼ 10⁻⁸. It will be shown below that, for the light meson final states, the branching ratios are also around this range; the present work thus provides some complementary information to these previous studies. On the other hand, experimentally heavy quarkonia will in general be reconstructed via leptonic decays into muon pairs with relatively small rates, Br(J/ψ → μ⁺μ⁻) = (5.961 ± 0.033)% and Br(Υ → μ⁺μ⁻) = (2.48 ± 0.05)% [30]; while for light mesons, ρ⁰ decays almost exclusively to π⁺π⁻ and ω has a large rate into π⁺π⁻π⁰ in the event reconstruction. Furthermore, since the contribution by the Higgs coupling to light quarks is negligible, most nonperturbative effects in our calculation, depicted in Figure 1, are contained in the matrix element $\langle V(p,\epsilon)|\bar{q}\gamma^\mu q|0\rangle = f_V m_V\, \epsilon^{*\mu}$ (1), where ε*_μ is the polarization vector of the vector meson V and f_V is its decay constant, which can be extracted from the measured V → e⁺e⁻ width. However, this is not the case for heavy meson final states, in which more nonperturbative information, such as the light-cone distribution amplitudes of J/ψ and Υ, would have to be involved in order to evaluate the contribution generated from Figure 1(a) of Ref. [27], due to the Higgs couplings to heavy quarks. This paper is organized as follows. In the next section, we present a detailed derivation of the decay amplitudes. Section 3 contains our numerical analysis, including calculations of branching ratios and studies of differential decay rates. We summarize our results and give some outlooks in Section 4.
Decay amplitudes
It is easy to see that the diagram in Figure 1(a) contains the couplings of the Higgs boson to a pair of neutral gauge bosons, ZZ, Zγ, and γγ, which in turn are converted to a lepton pair and to a qq̄ pair via the neutral current interactions

$\mathcal{L}_{\gamma f\bar{f}} = -e\, Q_f\, \bar{f}\gamma^\mu f\, A_\mu$

and

$\mathcal{L}_{Z f\bar{f}} = \frac{g}{2\cos\theta_W}\, \bar{f}\gamma^\mu \big(g_V^f - g_A^f \gamma_5\big) f\, Z_\mu .$

Here e is the QED coupling constant, g is the SU(2)_L coupling constant, θ_W is the Weinberg angle, and f denotes fermions including leptons and quarks.

[Table 1: Hadronic input parameters for light mesons. The values of the f_{V/P} are taken from Ref. [7].]
The vector and axial couplings are $g_V^f = T_3^f - 2 Q_f \sin^2\theta_W$ and $g_A^f = T_3^f$, where Q_f is the charge and T_3^f is the third component of the weak isospin of the fermion. The qq̄ pair then hadronizes into the vector meson ρ or ω.
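As a quick cross-check of these couplings, the short Python sketch below evaluates g_V^f and g_A^f from the quantum numbers just given; the fermion list and the numerical value of sin²θ_W are illustrative inputs of ours, not quantities quoted in this paper.

```python
SIN2_THETA_W = 0.231  # illustrative value of sin^2(theta_W); not quoted in the paper

# (T3, Q) for one representative fermion of each type (assumed standard assignments)
FERMIONS = {
    "nu": (+0.5,  0.0),
    "e":  (-0.5, -1.0),
    "u":  (+0.5, +2.0 / 3.0),
    "d":  (-0.5, -1.0 / 3.0),
}

def neutral_current_couplings(T3, Q, sin2w=SIN2_THETA_W):
    """g_V^f = T3 - 2 Q sin^2(theta_W), g_A^f = T3."""
    gV = T3 - 2.0 * Q * sin2w
    gA = T3
    return gV, gA

if __name__ == "__main__":
    for name, (T3, Q) in FERMIONS.items():
        gV, gA = neutral_current_couplings(T3, Q)
        print(f"{name:>2}: g_V = {gV:+.4f}, g_A = {gA:+.4f}")
    # Neutrinos reproduce g_V = g_A = 1/2, as used for h -> V nu nubar below.
```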
In the SM, the Z boson couples to the Higgs boson at tree level through the hZZ vertex

$\mathcal{L}_{hZZ} = \frac{m_Z^2}{v}\, h\, Z_\mu Z^\mu .$

However, the leading-order SM hZγ and hγγ interactions are induced by one-loop diagrams involving the W boson or heavy charged fermions like the top quark, and their explicit expressions can be found in Ref. [31]. On the other hand, one can write down effective Lagrangians for the hZγ and hγγ couplings generated in the SM of the forms [32]

$\mathcal{L}^{hZ\gamma}_{\rm eff} \propto C_{Z\gamma}\, \frac{h}{v}\, Z_{\mu\nu} F^{\mu\nu} \qquad (6)$

and

$\mathcal{L}^{h\gamma\gamma}_{\rm eff} \propto C_{\gamma\gamma}\, \frac{h}{v}\, F_{\mu\nu} F^{\mu\nu}, \qquad (7)$

where Z_{μν} and F_{μν} are the Z-boson and photon field strength tensors, the overall normalizations follow Ref. [32], and the dimensionless coefficients C_{Zγ} and C_{γγ} can be thought of as the effective couplings. Of course, for general effective interactions beyond the SM, some new structures other than eqs. (6) and (7) will appear, which have been analyzed in Refs. [19,7,13]. Now let us turn to the decay amplitudes. For the h → Vℓ⁺ℓ⁻ transitions, direct calculation from Figure 1 gives the amplitude of eq. (8), in which a Breit-Wigner propagator factor of the standard form $q^2 - m_Z^2 + i\, m_Z \Gamma_Z$ parameterizes the Z pole effect. Here k_1, k_2 and p represent the momenta of ℓ⁻, ℓ⁺ and ρ in the final state, respectively, $q^2 = (k_1+k_2)^2$ denotes the lepton pair mass squared, and G_V and Q_V are listed in Table 1. Eq. (1) and the vertex of the Higgs coupling to charged leptons have been used in deriving eq. (8). It is obvious that the virtual-Z contribution from diagram (b) of Figure 1 is strongly suppressed, and it has been neglected in eq. (8). Similarly, for h → Vνν̄ decays, only diagram (a), containing the hZZ and hZγ vertices, can contribute, and the amplitudes can be read from the first two lines of eq. (8). Note that g_V^ν = g_A^ν = 1/2 for neutrino final states.
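To see how such a Breit-Wigner factor shapes the dilepton spectrum discussed below, here is a minimal sketch of the Z-pole weight $|q^2 - m_Z^2 + i\, m_Z \Gamma_Z|^{-2}$; the mass and width values are standard PDG-like inputs of ours, not numbers taken from this paper.

```python
import numpy as np

M_Z, GAMMA_Z = 91.1876, 2.4952  # GeV; PDG-like inputs (assumption)

def z_pole_weight(q2):
    """|1 / (q^2 - m_Z^2 + i m_Z Gamma_Z)|^2: Breit-Wigner weight of the virtual Z."""
    dz = q2 - M_Z**2 + 1j * M_Z * GAMMA_Z
    return 1.0 / np.abs(dz)**2

m_ll = np.linspace(1.0, 120.0, 1000)     # dilepton invariant mass sqrt(q^2) in GeV
w = z_pole_weight(m_ll**2)
print(f"peak near sqrt(q^2) = {m_ll[np.argmax(w)]:.1f} GeV")  # ~ m_Z
```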
By squaring the amplitude and summing over the polarizations of the final particles, the differential decay rate of h → Vℓ⁺ℓ⁻ is obtained as in eq. (11), where θ^(ℓ) is the angle between the three-momentum of the Higgs boson and the three-momentum of ℓ⁻ in the dilepton rest frame, and the phase space is given by eq. (12). Let us further analyze the h → π⁰ℓℓ̄ modes, with ℓ denoting charged leptons or neutrinos. In the SM, these processes receive their dominant contributions through the hZZ vertex at tree level, and their amplitude can be directly written as in eq. (13), where p is the momentum of the neutral pion. Similar diagrams like Figure 1(b), in which π⁰ is produced through the virtual Z, can also lead to the transition involving charged-lepton final states at tree level. However, it is easy to see that these diagrams are suppressed by $m_\ell^2/m_Z^2$ compared with eq. (13) and thus can be neglected. To be complete, we also evaluate the lowest-order dominant contribution to h → P⁺ℓ⁻ν̄_ℓ (P = π, K) and h → V⁺ℓ⁻ν̄_ℓ (V = ρ, K*) decays, which is generated by the Higgs coupling to the W boson (the hW⁺W⁻ vertex) in the SM. As mentioned above, we again neglect the strongly suppressed diagrams like Fig. 1(b), in which the Z boson is replaced by the W boson. Their decay amplitudes then follow straightforwardly, with a Breit-Wigner factor $q^2 - m_W^2 + i\, m_W \Gamma_W$ parameterizing the W pole effect. Here V_{ij} denotes the relevant CKM matrix element, which is equal to V_{ud} for π⁺ and ρ⁺, and V_{us} for K⁺ and K*⁺, respectively.
Similar to the case of h → Vℓ⁺ℓ⁻ decays, shown in eq. (11), it is easy to derive the differential decay rates of the other processes above.
Numerical analysis
To illustrate the numerical results for the branching fractions of exclusive Higgs decays to a lepton pair plus light mesons, we normalize these decay rates to the theoretical prediction for the total Higgs width in the SM, Γ_h = 4.10 MeV, corresponding to m_h = 125.09 GeV [33]. Thus for h → Vℓℓ̄ with V = ρ⁰ and ω, it is straightforward to obtain the branching fractions for charged-lepton final states, and likewise for neutrino final states, where a factor of 3 has been included in the calculation due to the three neutrino flavors.
Furthermore, by taking the values of the decay constants from Ref. [7], as shown in Table 1, we obtain the numerical branching ratios of eqs. (17)-(24) for ℓ = e, μ, and τ, respectively. The errors of our predictions are due to the uncertainties of the decay constants of light mesons listed in Table 1.
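As a minimal illustration of the normalization introduced above, a partial width is converted to a branching fraction as follows; the width used in the example is a hypothetical placeholder, not one of the paper's results.

```python
GAMMA_H_MEV = 4.10  # SM total Higgs width for m_h = 125.09 GeV, as quoted above

def branching_ratio(partial_width_mev):
    """Br(h -> X) = Gamma(h -> X) / Gamma_h."""
    return partial_width_mev / GAMMA_H_MEV

# A hypothetical partial width of 4.1e-5 MeV corresponds to Br = 1e-5:
print(branching_ratio(4.1e-5))
```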
One can find that, for the different charged-lepton flavors (e, μ, and τ), the decay rates of h → P⁺(V⁺)ℓ⁻ν̄_ℓ are degenerate, and those of h → π⁰ℓ⁺ℓ⁻ are almost degenerate. This is easily understood because the former are dominated by the virtual h → W⁺W⁻ transition and the latter by the virtual h → ZZ vertex. However, it is a different story for the h → Vℓ⁺ℓ⁻ processes. The modes with τ pairs, h → ρ⁰(ω)τ⁺τ⁻, are predicted with larger rates, while the channels with muons and electrons have rates suppressed by about a factor of 40. In the present case, as shown in Figure 1, all of the ZZ, Zγ, γγ and ℓ⁺ℓ⁻ intermediate states can give contributions.
In order to explicitly observe their roles for the different charged-lepton final states, let us take h → ρ⁰ℓ⁺ℓ⁻ as an example; the dilepton invariant mass distributions of these decays are displayed in Figure 2 for ℓ = e, μ, and τ, respectively. The different types of contributions are plotted separately for comparison. From these three plots, one can readily find that the contribution from the Higgs coupling to leptons [Figure 1(b)] is vanishingly small for the electron mode and could be relevant in the muon case, while it is dominant for the τ⁺τ⁻ final state, giving about 99% of the contribution and thus leading to a larger rate. Other contributions are strongly suppressed in the process h → ρ⁰τ⁺τ⁻.
On the contrary, for ℓ = e and μ, both modes are dominated by the Zγ, γγ, and ZZ intermediate states via the γ*/Z poles, corresponding to the two peaks in the above plots. Obviously, γγ plays the dominant role in the low dilepton mass region. As the dilepton mass increases, the other contributions from Zγ and ZZ become dominant, with Zγ giving the more important one. In the large dilepton mass region beyond the Z pole, the contribution from the ℓ⁺ℓ⁻ intermediate state [Figure 1(b) via the virtual photon] becomes relevant for the muon mode. Further detailed numerical analysis shows that hZZ and hZγ give almost the same contributions to both modes. However, hγγ and the Higgs coupling to leptons play different roles in these two processes. The electron mode obtains a relatively large contribution from hγγ in the very low dilepton mass region due to the smallness of the electron mass, and, as discussed already, a significant contribution from Figure 1(b) in the high dilepton mass region can be expected in the h → ρ⁰μ⁺μ⁻ decay, which thus leads to the ratio in eq. (33). In addition, it is interesting to note that, in the SM, the hZγ and hγγ couplings are loop-induced, while hZZ and the Higgs coupling to leptons are tree-level vertices. This seems to indicate that h → ρ⁰ℓ⁺ℓ⁻ with ℓ = e and μ could be sensitive to short-distance physics, and studies of these decays may help to probe novel dynamics in the Higgs sector.
Besides the dilepton invariant mass distributions, one can further examine the differential angular distributions of h → ρ⁰ℓ⁺ℓ⁻ decays by integrating over s in eq. (11). The resulting distributions with respect to cos θ^(ℓ) are given in Figure 3 for ℓ = e, μ, and τ, respectively. Similar to the case of the invariant mass distribution, the contribution from Figure 1(b) almost saturates the angular differential decay rate in h → ρ⁰τ⁺τ⁻, is negligible in h → ρ⁰e⁺e⁻, and is not very large but non-negligible for the muon mode. It is easy to see that the angular distribution is symmetric under cos θ^(ℓ) ↔ −cos θ^(ℓ); therefore no forward-backward asymmetry arises in the SM. We also study the energy spectrum of the processes involving neutrino final states. Here the lepton pair mass squared is $q^2 = m_h^2 - 2 m_h E_\rho + m_\rho^2$, where E_ρ denotes the energy of the ρ meson in the rest frame of the Higgs boson. Using eq. (12), we have $m_\rho \le E_\rho \le (m_h^2 + m_\rho^2)/(2 m_h)$. Thus the normalized energy spectrum of the h → ρ⁰νν̄ decay with respect to E_ρ can be plotted, as displayed in Figure 4. As mentioned above, this channel receives its dominant contribution from the hZZ and hZγ transitions. The plot has a peak at E_ρ ∼ 30 GeV. The reason for this is that $\sqrt{q^2}$ is then close to m_Z, so the virtual Z boson can be on-shell around this region.
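The location of this peak follows directly from the kinematic relation above; a small numerical check with PDG-like mass inputs (our own illustrative values, not numbers quoted in the paper):

```python
import math

M_H, M_RHO, M_Z = 125.09, 0.775, 91.19  # GeV; PDG-like inputs (assumption)

def q2_from_erho(e_rho):
    """Lepton-pair mass squared q^2 = m_h^2 - 2 m_h E_rho + m_rho^2."""
    return M_H**2 - 2.0 * M_H * e_rho + M_RHO**2

# E_rho at which the virtual Z goes on-shell (sqrt(q^2) = m_Z):
e_peak = (M_H**2 + M_RHO**2 - M_Z**2) / (2.0 * M_H)
print(f"E_rho at Z pole: {e_peak:.1f} GeV")                      # ~ 29.3 GeV
print(f"check: sqrt(q^2) = {math.sqrt(q2_from_erho(e_peak)):.2f} GeV")  # = m_Z
```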
From eqs. (17)-(24), it is found that the decay rates of h → ωℓ⁺ℓ⁻ and h → ωνν̄ are suppressed by about one order of magnitude compared to those of the ρ⁰ modes. However, similar behavior to the above is expected if one also performs the dilepton invariant mass and angular distribution analyses of these decays.
Summary and outlook
We have investigated the exclusive rare Higgs decays h → ρ⁰(ω)ℓℓ̄, h → π⁰ℓℓ̄, h → π⁺(K⁺)ℓ⁻ν̄_ℓ, and h → ρ⁺(K*⁺)ℓ⁻ν̄_ℓ in the SM. The decay rates of these modes have been calculated, and their branching ratios are predicted to be around 10⁻⁵ ∼ 10⁻⁸, which can be compared to h → J/ψ(Υ)ℓℓ̄ decays. Experimental observation of these rare processes is generally challenging; nevertheless, it might be interesting in future high-energy and high-precision experiments, in order both to test the SM and to search for new physics (NP) beyond the SM.
We have presented a detailed analysis of the differential dilepton invariant mass distributions of h → ρ⁰ℓ⁺ℓ⁻ decays. In the SM, the decay amplitudes of these channels are governed by the Higgs couplings to gauge bosons (the hZZ, hZγ, and hγγ vertices) and to leptons (the hℓℓ̄ vertex). It has been shown that the contribution from the hℓℓ̄ vertex is completely dominant in h → ρ⁰τ⁺τ⁻ and negligible in h → ρ⁰e⁺e⁻, but all types of vertices can play significant roles in h → ρ⁰μ⁺μ⁻. In the SM, the hZZ and hℓℓ̄ couplings exist at tree level, while the hγγ and hZγ vertices are loop-induced and hence suppressed. However, due to the smallness of the light vector meson mass, the photon propagator is almost on-shell, which counteracts the loop suppression. As a result, the hZZ, hγγ, and hZγ couplings are of similar importance for the e⁺e⁻ and μ⁺μ⁻ final states. In particular, the contribution from the hZγ vertex even dominates over that from the hZZ interaction in the large dilepton mass region, as displayed in the plots of Figure 2. Thus a deviation from the SM prediction may be observed in h → ρ⁰ℓ⁺ℓ⁻ if NP scenarios give rise to a significant enhancement of the hZγ coupling.
From recent measurements by the ATLAS and CMS Collaborations [3], it is known that the Higgs couplings to the SM gauge bosons and fermions agree with the SM predictions within experimental and theoretical uncertainties. In particular, Higgs boson decays like h → ZZ* [34], h → γγ [35], and h → μ⁺μ⁻ [36] have been well studied, and their rates are consistent with the SM expectations at or below the O(20%) level. However, the channel h → Zγ has not been detected yet. The most recent search for this mode at the LHC comes from the ATLAS Collaboration, and the current strongest upper bound on its decay rate is set at 3.6 times the SM value [37]. This means that the present experimental limit allows substantial room for NP in the h → Zγ decay.
Some interesting NP models have recently been constructed [38], in which large contributions to h → Zγ, close to the current experimental upper bound, can be generated without conflicting with all other Higgs measurements. Accordingly, a significant enhancement of the decay rates of h → ρ⁰ℓℓ̄ could be expected. However, as pointed out by the authors of Ref. [38], this can only occur in some complicated NP sectors; in general, it is impossible to achieve this in simple models. Thus it is reasonable to anticipate that, with the accumulation of more experimental data, the decay rate of h → Zγ will eventually be observed close to the SM expectation, at or below the O(20%) level, like that of other Higgs decays.
On the other hand, even in the case that the NP effects are around the ten percent level, the three-body processes h → ρ⁰ℓ⁺ℓ⁻ (ℓ = e, μ), or experimentally the four-body h → (π⁺π⁻)_ρ ℓ⁺ℓ⁻, may offer nontrivial invariant mass and angular distributions, which would provide interesting information on short-distance dynamics. Therefore, if new structures other than eqs. (6) and (7), for instance the parity-odd term $\epsilon^{\mu\nu\alpha\beta} h Z_{\mu\nu} F_{\alpha\beta}$, appear in NP scenarios, some useful observables, such as forward-backward asymmetries or other asymmetries generated from the interference between the new amplitudes and the SM ones, will come out in the angular analysis. This is similar to the angular studies of h → Zℓ⁺ℓ⁻ decays in Refs. [39,40]. Such a detailed analysis of the h → (π⁺π⁻)_ρ ℓ⁺ℓ⁻ angular distribution is beyond the scope of the present paper and is left for a future separate publication.
"Physics"
] |
On Phase Transitions to Cooperation in the Prisoner's Dilemma
Game theory formalizes certain interactions between physical particles or between living beings in biology, sociology, and economics, and quantifies the outcomes by payoffs. The prisoner's dilemma (PD) describes situations in which it is profitable if everybody cooperates rather than defects (free-rides or cheats), but as cooperation is risky and defection is tempting, the expected outcome is defection. Nevertheless, some biological and social mechanisms can support cooperation by effectively transforming the payoffs. Here, we study the related phase transitions, which can be of first order (discontinuous) or of second order (continuous), implying a variety of different routes to cooperation. After classifying the transitions into cases of equilibrium displacement, equilibrium selection, and equilibrium creation, we show that a transition to cooperation may take place even if the stationary states and the eigenvalues of the replicator equation for the PD stay unchanged. Our example is based on adaptive group pressure, which makes the payoffs dependent on the endogenous dynamics in the population. The resulting bistability can invert the expected outcome in favor of cooperation.
When two entities characterized by the states, 'strategies', or 'behaviors' i and j interact with each other, game theory formalizes the result by payoffs P_ij, and the structure of the payoff matrix (P_ij) determines the kind of game. The dynamics of a system of such entities is often delineated by the so-called replicator equations

$\frac{dp(i,t)}{dt} = p(i,t)\left[\sum_j P_{ij}\, p(j,t) \;-\; \sum_{j,l} p(l,t)\, P_{lj}\, p(j,t)\right] \qquad (1)$

[3]. p(i,t) represents the relative frequency of behavior i in the system, which increases when the expected 'success' $F_i = \sum_j P_{ij}\, p(j,t)$ exceeds the average one, $\sum_i F_i\, p(i,t)$. Many collective phenomena in physics such as agglomeration or segregation phenomena can be studied in a game-theoretical way [5,6]. Applications also include the theory of evolution [10] and the study of ecosystems [11]. Another exciting research field is the study of mechanisms supporting cooperation between selfish individuals [1][2][3] in situations like the 'prisoner's dilemma' or public goods game, where they would usually defect (free-ride or cheat). Contributing to public goods and sharing them constitute ubiquitous situations where cooperation is crucial, for example in order to maintain a sustainable use of natural resources or a well-functioning health or social security system.
In the following, we will give an overview of the stationary solutions of the replicator equations (1) and their stability properties. Based on this, we will discuss several 'routes to cooperation', which transform the prisoner's dilemma into other games via different sequences of continuous or discontinuous phase transitions. These routes will then be connected to different biological or social mechanisms accomplishing such phase transitions [12]. Finally, we will introduce the concept of 'equilibrium creation' and distinguish it from routes to cooperation based on 'equilibrium selection' or 'equilibrium displacement'. A new cooperation-promoting mechanism based on adaptive group pressure will exemplify it.

Stability properties of different games. For games with only two strategies i, the replicator equations (1) simplify, and we are left with

$\frac{dp}{dt} = p(t)\,[1 - p(t)]\,\big\{\lambda_1\,[1 - p(t)] - \lambda_2\, p(t)\big\}, \qquad (2)$

where p(t) = p(1,t) represents the fraction of cooperators and 1 − p(t) = p(2,t) the fraction of defectors. $\lambda_1 = P_{12} - P_{22}$ and $\lambda_2 = P_{21} - P_{11}$ are the eigenvalues of the two stationary solutions p = p_1 = 0 and p = p_2 = 1.
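To make the dynamics of eq. (2) concrete, the following minimal sketch integrates it by forward Euler; the payoff values are illustrative textbook numbers of our choosing, not taken from the paper.

```python
def replicator_2(p0, lam1, lam2, dt=0.01, steps=5000):
    """Integrate dp/dt = p(1-p)[lam1(1-p) - lam2 p] by forward Euler."""
    p = p0
    for _ in range(steps):
        p += dt * p * (1.0 - p) * (lam1 * (1.0 - p) - lam2 * p)
    return p

# Prisoner's dilemma with payoffs T=5, R=3, P=1, S=0 (illustrative):
# lam1 = P12 - P22 = 0 - 1 = -1 < 0, lam2 = P21 - P11 = 5 - 3 = 2 > 0.
# Even starting from 90% cooperators, defection takes over:
print(replicator_2(p0=0.9, lam1=-1.0, lam2=2.0))  # -> ~0
```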
For the sake of our discussion, we imagine an additional fluctuation term ξ(t) on the right-hand side of Eq. (2), reflecting small perturbations of the strategy distribution. Four different cases can be classified [3]: (1) If λ_1 < 0 and λ_2 > 0, the stationary solution p_1, corresponding to defection by everybody, is stable, while the stationary solution p_2, corresponding to cooperation by everyone, is unstable. That is, any small perturbation will drive the system away from full cooperation towards full defection. This situation applies to the prisoner's dilemma (PD), defined by payoffs with P_21 > P_11 > P_22 > P_12. According to this, strategy i = 1 ('cooperation') is risky, as it can yield the lowest payoff P_12, while strategy i = 2 ('defection') is tempting, since it can give the highest payoff P_21. (2) If λ_1 > 0 and λ_2 < 0, the stationary solution p_1 is unstable, while p_2 is stable. This means that the system will end up with cooperation by everybody. Such a situation occurs for the so-called harmony game (HG) with P_11 > P_21 > P_12 > P_22, as mutual cooperation gives the highest payoff P_11. (3) If λ_1 > 0 and λ_2 > 0, the stationary solutions p_1 and p_2 are unstable, but there exists a third stationary solution p_3, which turns out to be stable. As a consequence, the system is driven towards a situation where a fraction p_3 of cooperators is expected to coexist with a fraction (1 − p_3) of defectors. Such a situation occurs for the snowdrift game (SD) (also known as the hawk-dove or chicken game). This game is characterized by P_21 > P_11 > P_12 > P_22 and assumes that unilateral defection is tempting, as it yields the highest payoff P_21, but also risky, as mutual defection gives the lowest payoff P_22. (4) If λ_1 < 0 and λ_2 < 0, the stationary solutions p_1 and p_2 are both stable, while the stationary solution p_3 is unstable. As a consequence, full cooperation is possible, but not guaranteed. In fact, the final state of the system depends on the initial condition p(0) (the 'history'): If p(0) < p_3, the system is expected to end up in the stationary solution p_1, i.e. with full defection. If p(0) > p_3, the system is expected to move towards p_2 = 1, corresponding to cooperation by everybody. The history-dependence implies that the system is multistable (here: bistable), as it has several (locally) stable solutions. This case is found for the stag hunt game (SH) (also called assurance). This game is characterized by P_11 > P_21 > P_22 > P_12, i.e. cooperation is rewarding, as it gives the highest payoff P_11 in case of mutual cooperation, but it is also risky, as it yields the lowest payoff P_12 if the interaction partner is uncooperative.
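The four cases can be condensed into a small classifier that reads off the game type from the signs of λ_1 and λ_2; the payoff matrices below are illustrative textbook examples of ours, and degenerate cases with vanishing eigenvalues are ignored in this sketch.

```python
def classify(P):
    """Classify a symmetric 2x2 game from lam1 = P12 - P22 and lam2 = P21 - P11.
    Degenerate cases (lam1 == 0 or lam2 == 0) are not handled here."""
    lam1 = P[0][1] - P[1][1]
    lam2 = P[1][0] - P[0][0]
    if lam1 < 0 and lam2 > 0:
        return "prisoner's dilemma: full defection (p1 = 0) is stable"
    if lam1 > 0 and lam2 < 0:
        return "harmony game: full cooperation (p2 = 1) is stable"
    if lam1 > 0 and lam2 > 0:
        return f"snowdrift game: stable coexistence at p3 = {lam1 / (lam1 + lam2):.3f}"
    return "stag hunt: bistable, outcome depends on p(0) relative to p3"

print(classify([[3, 0], [5, 1]]))  # PD: P21=5 > P11=3 > P22=1 > P12=0
print(classify([[3, 1], [2, 0]]))  # HG: P11=3 > P21=2 > P12=1 > P22=0
print(classify([[3, 1], [5, 0]]))  # SD: P21=5 > P11=3 > P12=1 > P22=0
print(classify([[5, 0], [3, 1]]))  # SH: P11=5 > P21=3 > P22=1 > P12=0
```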
Phase transitions and routes to cooperation. When facing a prisoner's dilemma, it is of vital interest to transform the payoffs in such a way that cooperation between individuals is supported. Starting with the payoffs P⁰_ij of a prisoner's dilemma, one can reach different payoffs P_ij, for example, by introducing strategy-dependent taxes $T_{ij} = P^0_{ij} - P_{ij} > 0$. When increasing the taxes T_ij from 0 to T⁰_ij, the eigenvalues will change from $\lambda^0_1 = P^0_{12} - P^0_{22}$ and $\lambda^0_2 = P^0_{21} - P^0_{11}$ to $\lambda_1 = \lambda^0_1 + T_{22} - T_{12}$ and $\lambda_2 = \lambda^0_2 + T_{11} - T_{21}$. In this way, one can create a variety of routes to cooperation, which are characterized by different kinds of phase transitions. We define route 1 [PD→HG] by a direct transition from a prisoner's dilemma to a harmony game. It is characterized by a discontinuous transition from a system in which defection by everybody is stable to a system in which cooperation by everybody is stable (see Fig. 1a). Route 2 [PD→SH] is defined by a direct transition from the prisoner's dilemma to a stag hunt game. After the moment t*, where λ_2 changes from positive to negative values, the system behavior becomes history-dependent: When the fluctuations ξ(t) for t > t* exceed the critical threshold $p_3(t) = \lambda_1/[\lambda_1 + \lambda_2(t)]$, the system will experience a sudden transition to cooperation by everybody. Otherwise one will find defection by everyone, as in the prisoner's dilemma (see Fig. 1b). In order to make sure that the perturbations ξ(t) will eventually exceed p_3(t) and trigger cooperation, the value of λ_2 must be reduced to sufficiently large negative values. It is also possible to have a continuous rather than sudden transition to cooperation: We define route 3 [PD→SD] by a transition from a prisoner's dilemma to a snowdrift game. As λ_1 is changed from negative to positive values, a fraction $p_3(t) = \lambda_1(t)/[\lambda_1(t) + \lambda_2]$ of cooperators is expected to result (see Fig. 1c). When increasing λ_1, this fraction rises continuously. One may also implement more complicated transitions. Route 4, for example, establishes the transition sequence PD→SD→HG (see Fig. 1d), while we define route 5 by the transition PD→SH→HG (see Fig. 1e). One may also implement the transition PD→SD→HG→SH (route 6, see Fig. 1f), establishing a path-dependence which can guarantee cooperation by everybody in the end. (When using route 2, the system remains in a defective state if the perturbations do not exceed the critical value p_3.) Relationship with cooperation-supporting mechanisms. We will now discuss the relationship of the above-introduced routes to cooperation with biological and social mechanisms ('rules') promoting the evolution of cooperation. Martin A. Nowak performs his analysis of five such rules with the reasonable specifications T = b > 0, R = b − c > 0, S = −c < 0, and P = 0 in the limit of weak selection [12]. Cooperation is assumed to require a contribution c > 0 and to produce a benefit b > c for the interaction partner, while defection generates no payoff (P = 0). As most mechanisms leave λ_1 or λ = (λ_1 + λ_2)/2 unchanged, we will now focus on the payoff-dependent parameters λ_1 and λ (rather than λ_1 and λ_2). The basic prisoner's dilemma is characterized by $\lambda^0_1 = -c$ and $\lambda^0 = 0$.
According to the Supporting Online Material of Ref. [12], kin selection (genetic relatedness) transforms the payoffs into P_11 = P_11^0 + r(b − c), P_12 = P_12^0 + br, P_21 = P_21^0 − cr, and P_22 = P_22^0. Therefore, it leaves λ unchanged and increases λ_1 by T_22 − T_12 = br, where r represents the degree of genetic relatedness. Direct reciprocity (repeated interaction) does not change λ_1, but it reduces λ by −(b − c)w/[2(1 − w)] < 0, where w is the probability of a future interaction. Network reciprocity (clustering of individuals playing the same strategy) leaves λ unchanged and increases λ_1 by H(k), where H(k) is a function of the number k of neighbors. Finally, group selection (competition between different populations) increases λ_1 by (b − c)(m − 1), where m is the number of groups, while λ is not modified. However, λ_1 and λ may also change simultaneously. For example, indirect reciprocity (based on trust and reputation) increases λ_1 by cq and reduces λ by −(b − c)q/2 < 0, where q quantifies social acquaintanceship.
Summarizing this, kin selection, network reciprocity, and group selection preserve λ = 0 and increase the value of λ_1 (see route 1 in Fig. 2). Direct reciprocity, in contrast, preserves the value of λ_1 and reduces λ (see route 2a in Fig. 2). Indirect reciprocity promotes the same transition (see route 2b in Fig. 2). As a supplement, one can analyze costly punishment. Using the payoff specifications made in the Supporting Information of Ref. [14], costly punishment changes λ by −(β + γ)/2 < 0 and λ_1 by −γ [14], i.e. when γ is increased, the values of λ and λ_1 are simultaneously reduced (see route 2c in Fig. 2). Here, γ > 0 represents the punishment cost invested by a cooperator to impose a punishment fine β > 0 on a defector, which decreases the payoffs of both interaction partners. Route 3 can be generated by the formation of friendship networks [13]. Route 4 may occur by kin selection, network reciprocity, or group selection, when starting with a prisoner's dilemma with λ^0 < 0 (rather than λ^0 = 0 as assumed before). Route 5 may be generated by the same mechanisms if λ^0 > 0. Finally, route 6 can be implemented by time-dependent taxation (see Fig. 2).
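The shifts quoted above can be tabulated in a few lines. In the sketch below (ours), all parameter values are invented for illustration, and H(k) is left as a free number since its functional form is not given here.

```python
# Tabulating the eigenvalue shifts quoted above, starting from the PD baseline
# lambda1_0 = -c, lambda_0 = 0. Parameter values are illustrative only.
b, c = 2.0, 1.0                         # benefit and cost of cooperation
r, w, q, m_g = 0.6, 0.9, 0.8, 3         # relatedness, repeat prob., acquaintance, groups
beta, gamma = 1.5, 0.6                  # punishment fine and cost
H_k = 1.5                               # assumed network-reciprocity gain H(k)

shifts = {                              # (shift of lambda_1, shift of lambda)
    "kin selection":        (b * r,               0.0),
    "direct reciprocity":   (0.0,                 -(b - c) * w / (2 * (1 - w))),
    "network reciprocity":  (H_k,                 0.0),
    "group selection":      ((b - c) * (m_g - 1), 0.0),
    "indirect reciprocity": (c * q,               -(b - c) * q / 2),
    "costly punishment":    (-gamma,              -(beta + gamma) / 2),
}
for name, (d1, dl) in shifts.items():
    lam1, lam = -c + d1, dl
    lam2 = 2 * lam - lam1               # recover lambda_2 from lambda and lambda_1
    print(f"{name:20s}: lambda1 = {lam1:+.2f}, lambda2 = {lam2:+.2f}")
```

With these numbers, kin selection, network reciprocity, and group selection flip λ_1 positive while keeping λ = 0 (route 1), whereas the reciprocity and punishment mechanisms drive λ_2 negative at fixed or reduced λ_1 (routes 2a-2c), matching the summary above.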
Further kinds of transitions to cooperation. The routes to cooperation discussed so far change the eigenvalues λ_1 and λ_2, and leave the stationary solutions p_1 and p_2 unchanged. However, transitions to cooperation can also be generated by shifting the stationary solutions or creating new ones, as we will show now. For this, we generalize the replicator equation (2) by replacing λ_1 with f(p) and λ with g(p), and by adding a term h(p), which can describe effects of spontaneous transitions like mutations. To guarantee 0 ≤ p(t) ≤ 1, we must have h(p) = v(p) − p·w(p) with functions w(p) ≥ v(p) ≥ 0. The resulting equation is dp/dt = F(p(t)) with F(p) = (1 − p)[f(p) − 2g(p)p]p + h(p), and its stationary solutions p_k are given by F(p_k) = (1 − p_k)[f(p_k) − 2g(p_k)p_k]p_k + h(p_k) = 0. The associated eigenvalues λ_k = dF(p_k)/dp, which determine the stability of the stationary solutions p_k, are λ_k = (1 − 2p_k)[f(p_k) − 2g(p_k)p_k] + p_k(1 − p_k)[f'_k − 2g'_k p_k − 2g(p_k)] + h'_k, where the abbreviations f'_k, g'_k and h'_k are the derivatives of the functions f(p), g(p) and h(p) in the points p = p_k.
Classification. We can now distinguish different kinds of transitions from defection to cooperation: If the stationary solutions p_1 = 0 and p_2 = 1 of the prisoner's dilemma are modified, we talk about transitions to cooperation by equilibrium displacement. This case occurs, for example, when random mutations are not weak (h ≠ 0). If the eigenvalues λ_1 or λ_2 of the stationary solutions p_1 = 0 and p_2 = 1 are changed, we speak of equilibrium selection. This case applies to all routes to cooperation discussed before. If a new stationary solution appears, we speak of equilibrium creation. The different cases often appear in combination with each other (see the Summary below). In the following, we will discuss an interesting case where cooperation occurs solely through equilibrium creation, i.e. the stationary solutions p_1 and p_2 of the replicator equation for the prisoner's dilemma as well as their eigenvalues λ_1 and λ_2 remain unchanged. We illustrate this by the example of an adaptive kind of group pressure that rewards mutual cooperation (T_11 < 0) or sanctions unilateral defection (T_21 > 0). Both rewarding and sanctioning reduce the value of λ_2, while λ_1 remains unchanged. Assuming here that the group pressure vanishes when everybody cooperates (as it is not needed then), while it is maximum when everybody defects (to encourage cooperation) [15], we may set f(p) = λ_1^0 and g(p) = λ^0 − K(1 − p). It is obvious that we still have the two stationary solutions p_1 = 0 and p_2 = 1 with the eigenvalues λ_1 = λ_1^0 < 0 and λ_2 = 2λ^0 − λ_1^0 > 0 of the original prisoner's dilemma with parameters λ_1^0 and λ_2^0 or λ^0. However, for large enough values of K [namely for K > K_0 = λ^0 + |λ_1^0| + √(|λ_1^0|(2λ^0 + |λ_1^0|))], we find two additional stationary solutions p_− and p_+: p_− is an unstable stationary solution with p_1 < p_− < p_+ and λ_− = dF(p_−)/dp > 0, while p_+ is a stable stationary solution with p_− < p_+ < p_2 and λ_+ = dF(p_+)/dp < 0 (see inset of Fig. 2). Hence, the assumed dependence of the payoffs on the proportion p of cooperators generates a bistable situation (BISTAB), with the possibility of a coexistence of a few defectors with a large proportion p_+ of cooperators, given K > K_0. If p(0) < p_−, where p(0) denotes the initial condition, defection by everybody results, while a stationary proportion p_+ of cooperators is established for p_− < p(0) < 1. Surprisingly, in the limit K → ∞, cooperation is established for any initial condition p(0) ≠ 0 (or through fluctuations).

Summary. We have discussed from a physical point of view what must happen for social or biological, payoff-changing interaction mechanisms to create cooperation in the prisoner's dilemma. The possible ways are (i) moving the stable stationary solution away from pure defection (routes 3, 4, and 6), (ii) stabilizing the unstable solution (routes 1, 2, 4, 5 and 6), or (iii) creating new stationary solutions which are stable (routes 3, 4 and 6). Several of these points can be combined. If (i) is fulfilled, we speak of "equilibrium displacement"; if (ii) applies, i.e. the eigenvalues change, we speak of "equilibrium selection"; and if (iii) is the case, we talk of "equilibrium creation". The first case can result from mutations; the second one applies to many social or biological cooperation-enhancing mechanisms [12]. We have discussed an interesting case of equilibrium creation, in which the outcome of the replicator equation is changed although the stationary solutions of the PD and their eigenvalues remain unchanged. A numerical sketch of this case follows below.
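The sketch below (ours; illustrative parameters) uses f(p) = λ_1^0 and g(p) = λ^0 − K(1 − p) as given above and confirms numerically that the extra stationary points p_− (unstable) and p_+ (stable) appear once K exceeds K_0.

```python
# Numerical check of equilibrium creation by adaptive group pressure:
# for K > K_0, two new stationary points p_-, p_+ appear while p = 0 and
# p = 1 keep their original stability. Parameter values are illustrative.
import numpy as np

lambda1_0, lambda_0 = -1.0, 0.5          # a prisoner's dilemma (lambda2_0 = 2 > 0)
K = 4.0

def F(p):
    g = lambda_0 - K * (1 - p)           # adaptive group pressure, zero at p = 1
    return p * (1 - p) * (lambda1_0 - 2 * g * p)

K0 = lambda_0 + abs(lambda1_0) + np.sqrt(abs(lambda1_0) * (2 * lambda_0 + abs(lambda1_0)))
# Interior stationary points solve 2K p^2 - 2(K - lambda_0) p - lambda1_0 = 0:
p_minus, p_plus = np.sort(np.roots([2 * K, -2 * (K - lambda_0), -lambda1_0]).real)
for p in (p_minus, p_plus):
    slope = (F(p + 1e-7) - F(p - 1e-7)) / 2e-7   # sign of dF/dp gives (in)stability
    print(f"p = {p:.3f}: dF/dp = {slope:+.3f} ({'unstable' if slope > 0 else 'stable'})")
print(f"K = {K} vs threshold K0 = {K0:.3f}")
```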
Such equilibrium creation can, for example, occur through adaptive group pressure [15], which introduces an adaptive feedback mechanism and thereby increases the order of non-linearity of the replicator equation. Surprisingly, already a linear dependence of the payoff values P_ij on the endogenous dynamics p(t) of the system is enough to destabilize defection and stabilize cooperation, thereby inverting the outcome of the prisoner's dilemma. | 4,561.2 | 2009-05-22T00:00:00.000 | [
"Economics",
"Sociology"
] |
Enhancing fiber atom interferometer by in-fiber laser cooling
We demonstrate an inertia sensitive atom interferometer optically guided inside a 22-cm-long negative curvature hollow-core photonic crystal fiber with an interferometer time of 20 ms. The result prolongs the previous fiber guided atom interferometer time by three orders of magnitude. The improvement arises from the realization of in-fiber {\Lambda}-enhanced gray molasses and delta-kick cooling to cool atoms from 32 {\mu}K to below 1 {\mu}K in 4 ms. The in-fiber cooling overcomes the inevitable heating during the atom loading process and allows a shallow guiding optical potential to minimize decoherence. Our results permit bringing atoms close to source fields for sensing and could lead to compact inertial quantum sensors with a sub-millimeter resolution.
Atom interferometric sensors use optical pulses along the atoms' trajectories to split, deflect, and recombine two interferometer arms. While large-scale free-space interferometers have shown unprecedented sensitivity in measuring gravity, inertial sensing, and tests of fundamental physics [1][2][3], the apparatus used to house the atoms typically has a cross section of tens of centimeters, set by diffraction of the laser beams used to interact with the atoms. Shrinking the apparatus could lead to a compact device and allow the atoms to be brought close to a source of fields of interest, enhancing the signal to be detected.
In free space, reducing the laser beam waist comes at the cost of shortening the distance over which atoms can effectively interact with the interferometer beams, thus decreasing the interferometer's sensitivity. Alternatively, hollow-core fibers offer a sub-millimeter enclosure that can guide the interferometer beams over diffraction-free and configurable paths. However, most high-sensitivity free-space interferometers require the preparation of ultracold atoms at sub-µK temperature in a low-noise environment, and strategies to create such conditions for fiber atom interferometers remain to be developed.
For example, to guide a large number of atoms into a fiber and avoid their collision with the fiber wall during the interferometer sequence, a deep trapping potential is necessary, but this generates inevitable heating of the atoms during loading and guiding [4]. The large trapping potential also introduces decoherence of the atoms' internal and external states through the differential ac Stark shift and the inhomogeneous dipole potential along the axial direction, limiting the fiber interferometer time to tens of µs [5]. Using a trapping potential of only a few recoil energies, the interferometer time of optically trapped atoms has reached sub-second durations in free space [6] and 20 s in an optical cavity [7]. It is therefore necessary to cool atoms directly inside the fiber and lower the trapping potential to minimize the decoherence. Unlike free-space laser cooling, the photonic structure of the fiber complicates cooling the atoms in terms of the cooling lasers' geometry and polarization. Although cooling of atoms along the axial direction of the fiber has been demonstrated using Raman sideband cooling [8], that scheme still requires a large trapping potential to confine the atoms radially. Here, we implement a two-stage cooling scheme of gray molasses and delta-kick cooling, two commonly implemented cooling methods in free-space atom interferometry, to cool the radial temperature of the atoms down to 1 µK. We utilize these cold atoms to demonstrate a Mach-Zehnder interferometer with an interferometer time of 20 ms.
The fiber used in this experiment is an Inhibited-Coupling guiding HCPCF (IC-HCPCF) with a tubular cladding made of a single ring of eight tubes [9], as shown in Fig. 1. The fiber core has a diameter of 41 µm and the 1/e^2 mode field diameter is 28.7 µm. We mount a 22-cm-long piece of the fiber vertically inside an ultrahigh-vacuum chamber at an angle of less than 3° from the direction of gravity. Atoms are loaded for 1 s from a two-dimensional magneto-optical trap (2D MOT) into a three-dimensional magneto-optical trap (3D MOT) positioned 1.5 cm above the fiber. The temperature of the cold atomic ensemble after sub-Doppler cooling is 8 µK. After the atoms are released from the 3D MOT, gravity and a dipole force from a 1064 nm laser with a power of 700 mW attract them into the fiber.
The optical depth (OD) of the atoms inside the fiber is determined by measuring the transmission T_tr of a probe pulse near resonance with the F = 3 to F′ = 4 transition, as OD = −ln T_tr. Figure 2 shows the resonant OD versus loading time t; the position of the atoms relative to the 3D MOT is plotted on the upper x-axis. An OD of one corresponds to approximately 6000 atoms for our experimental parameters. The atomic cloud begins to enter the fiber at 25 ms and fully exits the fiber at 210 ms. The major loss mechanism is the imperfect coupling of light into the fiber. Unwanted excitation of higher-order modes causes modal beating with the fundamental mode, and hence a spatial modulation of the potential along the axial direction [4]. The increase of the atom loss rate after 150 ms is mainly due to a bend of the fiber at that position, where the dipole force is not strong enough to deflect the atoms' trajectories.
We characterize the radial temperature of the atoms inside the fiber using the time-of-flight method [5,10]. We switch off the dipole beam and let the atomic cloud expand ballistically for a time t_r. A probe pulse of 3 nW is then switched on for 50 µs for the OD measurement as a function of t_r, as shown in the inset of Fig. 2. The OD for different release times t_r can be calculated from Eq. (1), in which OD_0 is the optical density when r = 0, W is the 1/e^2 radius of the guided mode, r^2 = r_0^2 + v^2 t_r^2 is the squared 1/e radius of the atomic cloud at time t_r, r_0 is the initial radius, v = √(2k_B T_a/m) is the most probable speed of the atoms, T_a is the temperature of the atoms, m is the atomic mass, and R is the radius of the inner fiber core. The initial radius of the atomic ensemble in the optical trap under equilibrium is related to the temperature as r_0 = W√(k_B T_a/(2U)), where k_B is the Boltzmann constant and U is the trapping potential. We fit Eq. (1) to the data and obtain a radial temperature of T_a = 32(1) µK after loading. The increase of temperature relative to the sub-Doppler cooling temperature is mainly due to the mismatch between the initial atomic cloud distribution and the dipole profile during the loading process. The high temperature of atoms inside the fiber has previously limited the interferometer time to tens of µs [5]. Here, we apply gray molasses and delta-kick cooling to the atoms. The gray molasses serves as a pre-cooling stage for efficient delta-kick cooling. After 60 ms of loading time, the atom cloud has entered the fiber to a distance of 2.6 mm from the fiber tip, and we minimize the ambient magnetic field to tens of mG to implement a Λ-enhanced two-dimensional (2D) gray molasses to cool the atoms radially. The 2D gray molasses comprises three laser beams intersecting at 120°, forming a plane perpendicular to the fiber, as shown in Fig. 1. This laser beam configuration allows excitation of the in-fiber atoms thanks to the negligible diffraction off the fiber microstructure, enabled by the single-ring cladding and the absence of a photonic bandgap at the beams' angle of incidence on the HCPCF. Each beam includes two frequencies, f_1 (pump) and f_2 (repump), which are +20 MHz detuned from the 85Rb D1 line F = 2 to F′ = 3 and F = 3 to F′ = 3 transitions, respectively. When these two frequencies are phase coherent, a dark state formed by a superposition of F = 2 and F = 3 is created to enhance the cooling efficiency [11,12]. The power ratio of the two frequencies is 35, which ensures that all the atoms are accumulated in the F = 2 state after 3.2 ms of gray molasses. Figure 3(a) shows the measured temperature versus the two-photon detuning of the molasses beams. The lowest temperature of 10 µK is achieved at two-photon resonance, a characteristic of gray molasses involving the dark states of the Λ configuration. To study the heating of the atoms, we measure the temperature at a loading time of 160 ms. The temperature increases from 10 µK after gray molasses at a loading time of 60 ms to 35 µK.
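The temperature extraction can be sketched numerically. Since Eq. (1) is not reproduced above, the overlap model in the following Python sketch is our own simplified stand-in; only the relations r^2 = r_0^2 + v^2 t_r^2, v = √(2k_B T_a/m), and r_0 = W√(k_B T_a/(2U)) are taken from the text, and the trap depth U and all numerical values are illustrative assumptions.

```python
# Hedged sketch of a time-of-flight temperature fit. The overlap form
# OD ~ OD0*(W^2/2 + r0^2)/(W^2/2 + r(t_r)^2) below is an assumed stand-in
# for the paper's Eq. (1), not the published model.
import numpy as np
from scipy.optimize import curve_fit

kB = 1.380649e-23                      # Boltzmann constant (J/K)
m = 85 * 1.66054e-27                   # 85Rb mass (kg)
W = 14.35e-6                           # 1/e^2 mode radius: 28.7 um diameter / 2
U = kB * 300e-6                        # assumed trap depth (300 uK)

def od_model(t_r, od0, Ta):
    r0_sq = W**2 * kB * Ta / (2 * U)   # initial 1/e cloud radius (equilibrium)
    r_sq = r0_sq + (2 * kB * Ta / m) * t_r**2
    return od0 * (W**2 / 2 + r0_sq) / (W**2 / 2 + r_sq)

t_r = np.linspace(0, 300e-6, 12)       # release times (s)
data = od_model(t_r, 1.0, 32e-6)       # synthetic data at 32 uK
data += np.random.default_rng(0).normal(0, 0.01, t_r.size)
(od0_fit, Ta_fit), _ = curve_fit(od_model, t_r, data, p0=(0.9, 20e-6))
print(f"fitted radial temperature: {Ta_fit * 1e6:.1f} uK")
```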
The moderate increase of the temperature excludes heating by the dipole laser as the cause of the atom loss during loading. To avoid decoherence from magnetic fields during the interferometer time, the atoms are optically pumped into the F = 2, m_F = 0 state (m_F being the Zeeman state) by a π-polarized optical pumping beam on the F = 2 to F′ = 2 D1 transition, incident from the side of the fiber, and a linearly polarized depump beam on the F = 3 to F′ = 3 D2 transition coupled into the HCPCF core. The quantization axis is defined by a 550 mG magnetic field along the fiber axis. After that, the optical dipole trap is switched off, letting the atoms expand for a time t_f. This free expansion creates a correlation between the position and velocity of the atoms, where atoms with higher velocity move farther away from the trap center. The trap is then switched on again for a duration t_k, and the atoms are decelerated by the position-dependent restoring force from the trap. Ideally, the impulse generated by the restoring force fully stops the momentum of the atoms under the condition t_f t_k = 1/ω^2, where ω = 2π × 3.7 kHz is the radial trap frequency. The temperature T_a that can be achieved is determined by the free expansion time as T_0/T_a = ω^2 t_f^2 when ω^2 t_f^2 ≫ 1, where T_0 is the initial temperature [13,14]. Figure 3(b) shows the time-of-flight measurements for different t_f. The kicking time t_k is chosen to give the best resultant temperature. At t_f = 1 ms, the optimized temperature of 165 nK is obtained at t_k = 15 µs. The lowest temperature starts to flatten out after t_f = 1 ms, mainly due to the anharmonicity of the trap. Moreover, due to the finite size of the guided mode, the long free expansion time required for low temperatures results in a significant loss of atoms. Figure 3(c) compares the time-of-flight measurements without cooling, with gray molasses, and with both gray molasses and delta-kick cooling. The decrease of the temperature after gray molasses also increases the density of atoms in the fiber by 30%, which can be seen from the increase of the initial OD compared to the results without any in-fiber cooling. At t_f = 200 µs and t_k = 24 µs, the fitted temperature of the time-of-flight measurements is 1.1(1) µK, a factor of two larger than the expected temperature of 0.5 µK. The product t_f t_k = 4.8 × 10^−9 s² is a factor of three larger than 1/ω² = 1.9 × 10^−9 s². These deviations from the ideal scenario also point to anharmonicity of the trap when the atoms are away from the trap center. After the delta-kick cooling pulse, we reduce the dipole power to 245 µW (a 105 nK trapping potential) to weakly trap the atoms and start the interferometer sequence. We implement a Mach-Zehnder interferometer sequence (beamsplitter-mirror-beamsplitter) while the atoms are in free fall. The beamsplitter and mirror optical pulses are formed by a pair of counter-propagating laser beams coupled into the fiber. These two beams drive a two-photon Raman transition between the F = 2, m_F = 0 and F = 3, m_F = 0 states. The differential ac Stark shift from the dipole beam between these two states is estimated to be only 60 mHz. To ensure that the two frequencies of the Raman beams are phase coherent, a free-running distributed Bragg reflector laser, 20 GHz red-detuned from the F = 2 to F′ = 3 D1 transition, is double-passed through a 1.5 GHz acousto-optic modulator (AOM) to generate the other frequency of the Raman beam.
The zero-order and first-order beams from the 1.5 GHz AOM pass through two separate 80 MHz AOMs that control their intensity, frequency, and phase. The mirror pulse duration of 1.5 µs corresponds to an effective two-photon Rabi frequency of 2π × 333 kHz, larger than the measured 40 kHz width of the Raman spectrum set by the velocity spread of the atoms.
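Stepping back to the delta-kick stage described above, the ideal-kick relations t_f t_k = 1/ω² and T_0/T_a = (ωt_f)² can be checked with a few lines of arithmetic (our sketch; T_0 = 10 µK after gray molasses). It reproduces the expected 0.5 µK at t_f = 200 µs and shows how far the measured kick times lie above the ideal values, consistent with the trap anharmonicity discussed above.

```python
# Quick numerical check (our arithmetic) of the ideal delta-kick relations
# quoted above: t_f * t_k = 1/omega^2 and T0/Ta = (omega * t_f)^2.
import numpy as np

omega = 2 * np.pi * 3.7e3            # radial trap frequency (rad/s)
T0 = 10e-6                           # pre-kick temperature (K)
for t_f in (200e-6, 1e-3):
    t_k = 1 / (omega**2 * t_f)       # ideal kick duration
    Ta = T0 / (omega * t_f)**2       # ideal final temperature
    print(f"t_f = {t_f * 1e6:6.0f} us -> ideal t_k = {t_k * 1e6:4.1f} us, "
          f"Ta = {Ta * 1e9:6.0f} nK")
```

This gives an ideal t_k of about 9 µs and T_a of about 0.46 µK at t_f = 200 µs, compared with the measured 24 µs and 1.1 µK above.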
After the first interferometer beamsplitter pulse, the hyperfine spin states of the atoms are entangled with their momentum states as |F = 2, m_F = 0⟩|p_0⟩ + e^{iφ_1}|F = 3, m_F = 0⟩|p_0 + 2ħk_eff⟩, where p_0 is the initial momentum of the atoms, k_eff is the effective wavenumber of the two Raman beams in the fiber, and φ_1 is the phase of the two Raman beams. The interferometer time is thus affected by both the spin and the motional coherence. The spin coherence of stationary atoms in the fiber has been demonstrated over 100 ms [15]. In this particular experiment, we measure the spin coherence of atoms under free fall using a spin-echo sequence with a pair of co-propagating Raman beams. The measured 1/e decay time of the spin-echo contrast is 15 ms, limited by the inhomogeneity of the magnetic field defining the quantization axis. The relative momentum of the two arms of the interferometer is mainly influenced by the irregularity of the trapping beams during the interferometer sequence [6,16].
The phase at the output of the interferometer is φ = −k_eff g′T² + φ_1 − 2φ_2 + φ_3, where g′ is the projection of gravity along the fiber axis, T is the separation time of the Raman pulses, and φ_{i=1,2,3} are the phases of the Raman pulses [5]. The population of atoms in the F = 3 state is P = (1 − cos φ)/2. We vary the phase φ_3 of the last Raman pulse and measure the population of atoms in the F = 3 state to scan the interference fringes. Figure 4(a) shows the contrast of the interference fringes for different T. We observed two different decay trends and fit the data with two exponentially decaying functions. The sharp decay of the contrast in the first 2 ms is due to radial expansion of the atoms in the shallow trap region, which decreases the efficiency of the beamsplitter and mirror pulses. The inset shows the decay of the OD versus T, which matches well with the fast decay trend of the contrast. The second decay trend, with a fitted decay constant of 7(3) ms, is dominated by the phase fluctuation of the Raman beams caused by vibrations. To confirm this, we measured the linewidth of the beat note between the two Raman beams to be 150 Hz after passing through the fiber, agreeing well with the observed contrast decay time.
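The fringe scan can be emulated directly from these relations. In the sketch below (ours), the value of k_eff for counter-propagating 795 nm D1 beams, the contrast value, and the choice φ_1 = φ_2 = 0 are illustrative assumptions.

```python
# Sketch of the Mach-Zehnder fringe scan using phi = -k_eff*g'*T^2 + phi1
# - 2*phi2 + phi3 and P = (1 - cos(phi))/2 as quoted above.
import numpy as np

k_eff = 2 * (2 * np.pi / 795e-9)     # two counter-propagating D1 photons (assumed)
g_proj, T = 9.81, 10e-3              # gravity projection (m/s^2), pulse separation (s)
contrast = 0.3                       # fringe contrast, illustrative
phi3 = np.linspace(0, 4 * np.pi, 200)
phi = -k_eff * g_proj * T**2 + phi3  # phi1 = phi2 = 0 for simplicity
P_F3 = (1 - contrast * np.cos(phi)) / 2      # population in F = 3
print(f"gravity-induced phase k_eff*g'*T^2 = {k_eff * g_proj * T**2:.1f} rad")
```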
The main improvement of our results over [5] is the colder temperature of the atoms, which allows us to use a shallower trapping power to reduce the decoherence from the dipole force. To isolate the influence of the dipole laser on the coherence, we perform measurements at short T with the dipole laser switched off during the interferometer sequence. Keeping the dipole laser on does not introduce additional decoherence at T = 3 ms but retains the atoms in the trap, as shown in Figs. 4(a) and (b). In Fig. 4(c), we study the contrast decay for different dipole powers at T = 2 ms. The contrast starts to decrease at a dipole power of 2 mW due to the irregularity of the dipole beam along the fiber axis.
The initial contrast is limited by the temperature of the atoms in the axial direction and by the efficiency of the interferometer pulses. By adding gray molasses along the third dimension, a temperature of 5 µK can be achieved [17]. The efficiency of the interferometer pulses can be improved using large-bandwidth adiabatic rapid passage [18,19]. We expect to extend the interferometer time beyond one hundred ms by installing a vibration isolation platform and by minimizing the higher-order modes when coupling the dipole and Raman beams into the fiber.
In summary, we have demonstrated direct laser cooling of atoms inside a hollow-core fiber to below 1 µK using Λ-enhanced gray molasses and delta-kick cooling. Both cooling schemes are also applicable to cold atoms trapped and guided by other photonic waveguides, such as nanofibers and photonic crystal slabs, for quantum optics and many-body physics experiments [20]. With colder atoms in the trap, we extend the coherence time of an inertia-sensitive atom interferometer optically guided inside a hollow-core fiber to 20 ms with only 245 µW of optical dipole trap power and 10 µW of Raman beam power. The sub-millimeter package of the interferometer could allow short-range force and potential measurements with high spatial resolution and can find applications in constraining deviations from Newton's law of gravity [21] and in testing the quantum nature of gravity [22].
We thank Matt Jaffe, Cris Panda, and Holger Müller for discussion. This work is supported by the Singapore National Research Foundation under Grant No. QEP-P4, and the Singapore Ministry of Education under Grant No. MOE2017-T2-2-066. | 4,141.2 | 2021-12-19T00:00:00.000 | [
"Physics"
] |
Old but Fancy: Curcumin in Ulcerative Colitis—Current Overview
Ulcerative colitis (UC) is one of the inflammatory bowel diseases (IBD). It is a chronic autoimmune inflammation of unclear etiology affecting the colon and rectum, characterized by unpredictable exacerbation and remission phases. Conventional treatment options for UC include mesalamine, glucocorticoids, immunosuppressants, and biologics. The management of UC is challenging, and other therapeutic options are constantly being sought. In recent years, more attention has been paid to curcumin, the main active polyphenol found in the turmeric root, which has numerous beneficial effects in the human body, including anti-inflammatory, anticarcinogenic, and antioxidative properties, targeting several cellular pathways and influencing the intestinal microbiota. This review summarizes the current knowledge on the role of curcumin in UC therapy.
Introduction
Ulcerative colitis (UC) is a chronic inflammatory bowel disease (IBD) that affects the colon [1]. Typical mucosal inflammation involves the rectum but can extend continuously to proximal segments of the large intestine. Like other types of IBD, UC is classified as an autoimmune disease of unclear etiopathology characterized by phases of exacerbation and remission. Its symptoms consist mainly of abdominal pain and bloody diarrhea. The etiology and pathogenesis of UC are multifactorial and still not fully understood; they include genetic predisposition, immunological dysregulation, intestinal dysbiosis, epithelial barrier dysfunction, and many potential environmental factors, which jointly sustain chronic inflammation [2].
The incidence of UC is rising around the world, causing a global problem, and it is being diagnosed at an earlier age. It is estimated that at least a quarter of patients experience their first symptoms in childhood or adolescence [3,4]. Moreover, extensive colitis occurs in two-thirds of newly diagnosed pediatric patients versus only 20-30% of adult patients [5]. The population-based studies of Ng et al. show that in adults the highest prevalence of UC is in Europe and North America (505/100,000 in Norway and 286/100,000 in the USA) [4]. In the pediatric population, UC prevalence is estimated at 22/100,000 in most European and North American regions [6].
The main goal of UC management is to induce and maintain remission, defined as the resolution of symptoms with endoscopically confirmed mucosal healing, for as long as possible. Long-term maintenance of IBD remission enables children to grow and develop properly, and adults to lead a normal personal and professional life. Pharmacological treatment for UC depends on the disease extent and the degree of its clinical activity and includes 5-aminosalicylic acid drugs, glucocorticoids, and immunomodulating, immunosuppressive, and biological drugs. Unfortunately, monotherapy is not always fully effective, and the long-term combination of several drugs may be associated with side effects. Surgical management, with colectomy as the most common procedure, is sometimes the only solution in relapsing and severe disease and is usually implemented in patients with pancolitis [1,7]. There is no fully effective, universal treatment, as the etiology of the disease is complex. However, therapy should aim to provide a satisfactory quality of life for the affected individuals.
These days, more attention is being paid to the necessity of modifying environmental factors, which include, among others, dietary aspects, as the so-called Western diet has been linked to the highest prevalence of IBD, including UC [8]. Such a strategy may improve clinical outcomes in UC patients while minimizing the risk of side effects. Recently explored alternative therapeutic options include specific dietary approaches and the use of nutraceuticals (e.g., polyphenols). A nutraceutical is "a food (or part of a food) that provides medical or health benefits, including the prevention and/or treatment of a disease" [9,10].
More research is focusing on the medicinal value of polyphenols to prevent immune-mediated chronic intestinal inflammation. Over the last few years, curcumin, a natural polyphenol belonging to the curcuminoid family (compounds derived from Curcuma longa L. [turmeric root]), has attracted greater interest in the context of managing UC. Curcumin seems to be a promising natural compound due to its widely described multi-beneficial effects on microbiota alteration and its antioxidative, antitumor, and, most relevantly, anti-inflammatory properties. The anti-inflammatory effect is mainly mediated via regulation of the nuclear factor kappa-light-chain-enhancer of activated B cells (NF-κB), which results in the inhibition of the expression of proinflammatory cytokines such as IL-1, IL-6, and TNF-α [11][12][13].
This article provides an overview and clinical perspectives on the role of curcumin as an adjuvant therapy in ulcerative colitis, with particular attention to its influence on the intestinal microbiota.
Curcumin
Curcumin, also known as the 'golden spice of India', has been used for thousands of years as an essential medicinal herbal ingredient exhibiting mainly anti-inflammatory, antioxidant, and antimicrobial properties. It is also well-known in traditional Chinese medicine. Nowadays, curcumin, an orange-yellow crystalline powder, is widely used in the food industry, mostly as a dye (E100) in foodstuff and beverage processing. It is also a very popular dietary spice in many cuisines worldwide. It is extracted from the rhizomes of Curcuma longa L. of the ginger family Zingiberaceae. Curcumin comprises 2-5% of the rhizome content. Chemically, curcumin is diferuloylmethane, or 1,7-bis(4-hydroxy-3-methoxyphenyl)-1,6-heptadiene-3,5-dione, with the molecular formula C21H20O6 [14]. It is the principal curcuminoid and the most active component of the total turmeric spice. It belongs to the substances generally recognized as safe (GRAS), with its safety and tolerance confirmed in human clinical trials [14][15][16]. The Joint Food and Agriculture Organization (FAO)/World Health Organization (WHO) Expert Committee on Food Additives (JECFA) and the European Food Safety Authority (EFSA) allocated an acceptable daily intake (ADI) for curcumin of 0-3 mg/kg body weight [17]. Commercial curcumin consists of three major compounds, collectively referred to as curcuminoids: curcumin [diferuloylmethane] (82%), demethoxycurcumin (15%, DMC), and bisdemethoxycurcumin (3%, BDMC) [18].
Curcumin is a small-molecular-weight, lipophilic compound and is thereby nearly insoluble in aqueous physiologic media, but it is soluble in methanol, dimethyl sulfoxide, ethanol, and acetone, as well as slightly soluble in benzene and ether. It is a very photosensitive compound [14,19]. This yellow-colored polyphenol is a small hydrophobic molecule that can accumulate in cell membranes, which are hydrophobic regions, and act as an antioxidant, scavenging reactive oxygen species. It is stable in the pH range 2.5-6.5 and remains quite stable at the low acidic pH of the stomach [20,21].
After oral administration, curcumin is rapidly metabolized via reduction, sulfation, and glucuronidation in the liver, kidneys, and intestinal mucosa, with only low absorption of curcumin from the intestine [22,23]. Phase I of curcumin metabolism consists of the reduction of its double bonds in hepatocytes and enterocytes, transforming it to dihydrocurcumin, tetrahydrocurcumin, hexahydrocurcumin, and octahydrocurcumin [24,25].
Phase II consists of the conjugation of glucuronide or sulfate to curcumin and to its hydrogenated metabolites in the intestinal and hepatic cytosol [26]. The major curcumin metabolites in plasma, the glucuronide and sulphate conjugates, are characterized by low activity [22]. There is greater curcumin metabolic conjugation and reduction in human models than in rat models. Therefore, human clinical trials are much more appropriate and are greatly needed to assess curcumin's real therapeutic potential [26]. The gut microbiome is considered capable of deconjugating phase II metabolites and converting them back to the metabolites of phase I. This process can also lead to the production of, for example, ferulic acid (4-hydroxy-3-methoxycinnamic acid), a phenolic antioxidant compound with a strong scavenging effect on free radicals [27,28]. Furthermore, commensal Escherichia coli was found to have the highest metabolizing activity among curcumin-converting microorganisms, via an enzyme called "NADPH-dependent curcumin/dihydrocurcumin reductase" (CurA). E. coli acts in a two-step reduction process, converting curcumin to dihydrocurcumin and then to tetrahydrocurcumin [29]. In another study, it was reported that Blautia sp. MRG-PMF1 carries out an alternative metabolism of curcumin, namely curcumin demethylation. This process led to the production of two metabolites, demethylcurcumin and bisdemethylcurcumin [30].
For years, the main limitations of curcumin as a therapeutic option have been its chemical instability and poor systemic bioavailability, with very low or almost undetectable concentrations in blood and extraintestinal tissues, and its rapid metabolism and prompt systemic elimination [31]. Rapid elimination of curcumin from the body results in the excretion of more than 90% of curcumin in the feces. The search for an appropriate delivery method has been a challenge for curcumin's use as an effective treatment for specific diseases. This has resulted in the development of specific, promising strategies with some success in increasing blood concentrations; a few examples are mentioned below [32].
The most common way to improve curcumin's poor pharmacokinetic profile is to combine curcumin with the natural alkaloid of black pepper, piperine (Piper nigrum), which is mainly a strong inhibitor of the glucuronidation process. This formulation resulted in a 3-fold increase in curcumin concentrations, as compared to pure curcumin, when 5 mg of piperine was added to 2 g of curcumin [31]. Curcumin dispersed with colloidal nanoparticles (Theracurmin) was highly absorptive: the area under the blood concentration-time curve (AUC) after oral administration of 30 mg of Theracurmin was 27-fold higher than that of curcumin powder in healthy volunteers [33]. Another example of improving curcumin's bioavailability is curcumin in a micellar system. In healthy subjects, the administration of micronized curcumin powder and of curcumin incorporated into liquid micelles resulted in significantly higher concentrations of curcumin in plasma and in urine than supplementation with native curcumin [34]. There are also some other curcumin combinations, such as a mixture of turmeric powder and turmeric essential oil, lipid-curcumin formulations, or a curcumin mixture with lecithin [24].
Curcumin, Anti-Inflammatory Effect, and Ulcerative Colitis
The significant anti-inflammatory properties of curcumin described over the years have attracted much research interest, especially in the context of treating diseases with a basis in chronic inflammation. NF-κB is a multi-functional key nuclear transcription factor involved in the development of inflammatory diseases and is believed to strongly affect the progression of mucosal inflammation in ulcerative colitis. Many studies have shown that curcumin inhibits NF-κB expression by blocking IkappaB (IκB) kinase, which prevents the cytokine-mediated phosphorylation and degradation of IκB, an inhibitor of NF-κB. Hence, the expression of proinflammatory cytokines, such as IL-1, IL-6, IL-8, and TNF-α, is inhibited [35,36]. Furthermore, it has also been reported that curcumin inhibits the activity of proinflammatory proteins (e.g., activator protein-1, peroxisome proliferator-activated receptor gamma, transcription activators, and the expression of β-catenin) [37].
Curcumin, Intestinal Microbiota, and Ulcerative Colitis
As oral supplementation with curcumin leads to high concentrations in the gastrointestinal tract, studies have increasingly focused on its impact on the intestinal microbiota. Via this mechanism, the low systemic bioavailability of curcumin is probably not a significant issue within the gastrointestinal tract, and curcumin may have a hypothetically beneficial influence on the gut microbiome [21,38]. A bidirectional interaction exists between curcumin and the gut microbiota. Gut microbiota are actively involved in curcumin metabolism, leading to curcumin biotransformation (demethylation, hydroxylation, demethoxylation) and the production of metabolites. Curcumin supplementation is effective in promoting the growth of beneficial bacterial strains, improving intestinal barrier function, and counteracting the expression of pro-inflammatory mediators [39].
Only one study in healthy humans assessed microbiota alteration after oral curcumin administration. Peterson et al., in a double-blind, randomized, placebo-controlled pilot study with 30 healthy subjects, assessed changes in the gut microbiota using 16S rDNA sequencing after oral supplementation with turmeric 6000 mg with extract of piperine, curcumin 6000 mg with Bioperine (black pepper extract) tablets, or placebo, at baseline and after 4 and 8 weeks. They found that both turmeric and curcumin in a highly similar manner altered the gut microbiota. Participants who took turmeric supplementation displayed a 7% increase in observed microbial species post-treatment, and curcumin-treated subjects displayed an average increase of 69% in detected bacterial species. The authors indicated that the intestinal microbiota responses to such therapy were highly personalized. Subjects defined as "responders" showed uniform increases in most Clostridium spp., Bacteroides spp., Citrobacter spp., Cronobacter spp., Enterobacter spp., Enterococcus spp., Klebsiella spp., Parabacteroides spp., and Pseudomonas spp., with reduction in the relative abundance of several Blautia spp. and most Ruminococcus spp. [40].
UC patients have been reported to have intestinal dysbiosis with regard to the diversity and composition of the intestinal microbiota on different taxonomic levels. The bacteria observed to decrease belong mainly to the Firmicutes and Bacteroidetes phyla, which are considered beneficial, while the increased bacteria belong to the Enterobacteriaceae family, considered harmful [41][42][43]. Recently, in a pilot study, Zakerska-Banaszak et al. determined specific changes in the gut microbiota profile of Polish UC patients compared to a healthy control group. They reported significantly lower gut microbiome diversity in UC patients, with a greater abundance of Proteobacteria (8.42%), Actinobacteria (6.89%), and Candidate Division TM7 (2.88%) compared to the controls. They also found that Bacteroidetes and Verrucomicrobia were less abundant in UC as compared to the control group (14% and 0% vs. 27.97% and 4.47%, respectively). Importantly, the researchers also observed a decrease in beneficial bacteria, such as Faecalibacterium prausnitzii and Blautia, which are butyric acid producers. Butyric acid is one of the crucial short-chain fatty acids (SCFAs) for the maintenance of intestinal homeostasis and is produced from specific dietary fibers [44]. It is uncertain whether intestinal dysbiosis is a consequence or a cause of chronic gut inflammation. However, studies have pointed out the pivotal role of the gut microbiota in the host immune system via gut-associated lymphoid tissue (GALT). Therefore, any microbiome disturbances may be related to many diseases, including ulcerative colitis, and studies are focusing on the effects of intestinal dysbiosis on colonic mucosal barrier integrity, regulation of the host immune response, and, finally, its contribution to the progression of tumorigenesis [45]. Gut microbiota modifications through dietary interventions, targeting bacterial species involved in the disease course, may broaden the therapeutic landscape, providing a chance for personalized therapies in UC patients [46].
To our knowledge, no human studies have been published to date assessing the effect of curcumin on the gut microbiome in patients with UC. Here we report two studies on the effect of curcumin supplementation on the intestinal microbiota in experimental animal models of colitis.
The first study assessed the effect of a curcumin-supplemented diet (98.05% pure curcumin, free of the contaminating curcuminoids demethoxycurcumin and bisdemethoxycurcumin) in a mouse model of colitis-associated colorectal cancer; the diet increased bacterial richness, prevented the age-related decrease in alpha diversity, increased the relative abundance of Lactobacillales, and decreased Coriobacteriales. The authors concluded that the favorable effect of curcumin on tumorigenesis was linked with the maintenance of a greater colonic microbiota diversity [47].
The second study, conducted by Ohno et al., examined the effects of nanoparticle curcumin (Theracurmin) supplementation on dextran sulfate sodium (DSS)-induced colitis in mice. The curcumin therapy increased the abundance of butyrate-producing bacteria, which led to increased fecal butyrate levels [48]. Butyric acid is known to have a beneficial influence on intestinal homeostasis and energy metabolism. Its anti-inflammatory properties have been linked with enhancing intestinal barrier function and mucosal immunity [49].
Curcumin for Induction or Maintenance of Remission in UC: Data from Systematic Reviews and Meta-Analysis
Since 2020, interest in curcumin for treating UC has increased noticeably, as evidenced by the rise in published systematic reviews. Searching PubMed for the descriptors "curcumin and ulcerative colitis" yielded sixteen systematic reviews and/or meta-analyses since 2012, nine of which have been published from 2020 to the present. Four of these nine papers were not summarized in our review, among them a protocol from Yang et al. [55]. The authors concluded that curcumin with mesalamine in UC patients was effective and safe. They recommended further studies to determine the appropriate drug form, dose, duration, and delivery method.
Coelho et al., in their systematic review, included six RCTs with a total of 372 patients (aged 23 to 61) with UC [56]. Curcumin was administered for the maintenance or induction of remission in patients with mild to moderate disease activity. The studies showed good tolerance to curcumin complementary therapy administered with standard treatments. In addition, five of six trials demonstrated good results related to the clinical and/or endoscopic remission/response. Goulart et al. assessed the role of curcumin therapy for the induction of remission in UC in their meta-analysis, which included only four RCTs with 238 enrolled patients with mild-to-moderate UC [57]. The authors concluded that despite the small number of patients enrolled, the supplementation of curcumin had a beneficial impact on the clinical remission of patients with UC.
The latest systematic review of the efficacy and safety of supplemental curcumin therapy in UC, performed by Yin et al. (2022), included six randomized trials with a total of 385 patients [58]. The authors reported that supplemental curcumin treatment for UC was safe, without any severe side effects. It effectively induced clinical remission (RR = 2.10, 95% CI 1.13 to 3.89) but not clinical improvement, endoscopic remission, or endoscopic improvement. The authors emphasized the importance of the optimal method of curcumin administration for a better curative effect and suggested further well-planned studies. In Table 1 we summarize the basic characteristics of the five studies described above. Shi et al. critically assessed the scientific quality of seven relevant systematic reviews (SRs) and meta-analyses (MAs), including those by Chandan, Zheng, and Goulart (above) [59]. They found the methodological quality of all assessed SRs/MAs to be very low. The quality of evidence for outcomes ranged from moderate to very low. Factors in the low-quality assessments included imprecision, publication bias, and inconsistency. The authors of this overview of SRs state that curcumin may be effective and safe for the treatment of UC, but that further research is needed.
Curcumin in the Algorithm of Managing Pediatric UC
According to official recommendations from the European Crohn's and Colitis Organization (ECCO) and the European Society of Paediatric Gastroenterology, Hepatology and Nutrition (ESPGHAN), curcumin may be considered an additional therapy for inducing and maintaining clinical remission in mild to moderate (PUCAI ≤ 60) disease, assessed by the Paediatric Ulcerative Colitis Activity Index (PUCAI) [60]. Curcumin has been shown to be well tolerated in children. It has been established that the safe curcumin dose for remission induction is up to 4 g/day and for maintenance up to 2 g/day. Proposed dosage is shown in Table 2 [61][62][63].
Conclusions and Directions for Future Research
Data are accumulating on curcumin's anti-inflammatory effect in patients with UC, but there is still insufficient evidence to define its effect on the composition and functioning of the intestinal microbiota in the course of the disease. For some individuals affected by UC, there seems to be a real need to define curcumin's role as a supplement in safe, bioavailable, well-tolerated doses, and to incorporate it into routine clinical practice for better clinical outcomes and improved quality of life. Well-planned, large-scale, randomized controlled trials are needed to assess the benefits of such supplementation, including its impact on the composition, diversity, and metabolic functions of the intestinal microbiota, in treatment schemes for the induction and/or maintenance of remission in pediatric and adult patients. | 4,410 | 2022-12-01T00:00:00.000 | [
"Biology",
"Medicine"
] |
Machine Learning Estimators for Lattice QCD Observables
A novel technique using machine learning (ML) to reduce the computational cost of evaluating lattice quantum chromodynamics (QCD) observables is presented. The ML is trained on a subset of background gauge field configurations, called the labeled set, to predict an observable $O$ from the values of correlated, but less compute-intensive, observables $\mathbf{X}$ calculated on the full sample. By using a second subset, also part of the labeled set, we estimate the bias in the result predicted by the trained ML algorithm. A reduction in the computational cost by about $7\%-38\%$ is demonstrated for two different lattice QCD calculations using the Boosted decision tree regression ML algorithm: (1) prediction of the nucleon three-point correlation functions that yield isovector charges from the two-point correlation functions, and (2) prediction of the phase acquired by the neutron mass when a small Charge-Parity (CP) violating interaction, the quark chromoelectric dipole moment interaction, is added to QCD, again from the two-point correlation functions calculated without CP violation.
I. INTRODUCTION
Simulations of lattice QCD provide values of physical observables from correlation functions calculated as averages over gauge field configurations, which are generated using a Markov chain Monte Carlo method with the action as the Boltzmann weight [1,2]. Each measurement is computationally expensive, and a standard technique to reduce the cost is to replace the "high precision" average of an observable O by a "low precision" (LP) version of it, O_LP [3,4], and then perform bias correction (BC), i.e., ⟨O⟩ = ⟨O_LP⟩ + ⟨O − O_LP⟩. The method works because the second term can be estimated with sufficient precision from a smaller number of measurements if the covariance between O and O_LP is positive and comparable to the variance of O, which is the case if, for example, the fluctuations in both are controlled by effects common to both. One can replace O_LP in the above formulation with any quantity; however, improved results are obtained when a quantity with statistical fluctuations similar to those of O is chosen for O_LP. Since most underlying gauge dynamics affect a plethora of observables in a similar way, such quantities surely exist; the trick, however, is to find suitable sets of quantities.
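A toy demonstration (ours, not from the paper) of why the truncation trick works: when the fluctuations of O and O_LP share a large common component, the correction term ⟨O − O_LP⟩ has a much smaller variance than O itself and can be pinned down from far fewer paired measurements.

```python
# Synthetic demonstration of <O> = <O_LP> + <O - O_LP>: shared fluctuations
# cancel in the correction term, so it needs only a few expensive pairs.
import numpy as np

rng = np.random.default_rng(1)
M, N = 100_000, 2_000                 # cheap LP measurements vs. expensive pairs
common = rng.normal(0.0, 1.0, M)      # fluctuations shared by O and O_LP
O    = 1.00 + common + rng.normal(0, 0.1, M)   # expensive observable
O_LP = 0.95 + common + rng.normal(0, 0.1, M)   # cheap proxy with a small bias

direct = O[:N].mean()                                 # expensive-only estimate
corrected = O_LP.mean() + (O[:N] - O_LP[:N]).mean()   # <O_LP> + <O - O_LP>
err = np.hypot(O_LP.std() / np.sqrt(M),
               (O[:N] - O_LP[:N]).std() / np.sqrt(N)) # approximate error
print(f"direct:    {direct:.4f} +- {O[:N].std() / np.sqrt(N):.4f}")
print(f"corrected: {corrected:.4f} +- {err:.4f}")
```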
Machine learning (ML) algorithms build predictive models from data. In contrast to conventional curve-fitting techniques, ML does not use a "few parameter functional family" of forms for the prediction. Instead, it searches over a space of functions approximated using a general form with a large number of free parameters, which requires a correspondingly large amount of training data to avoid overfitting. ML has been successful in various applications where such data are available, including exotic particle searches [5] and Higgs → ττ analyses [6] at the Large Hadron Collider. It has recently been applied to lattice QCD studies [7][8][9]. Here we introduce a general ML method for estimating observables calculated using expensive Markov chain Monte Carlo simulations of lattice QCD that reduces the computational cost.
Consider M samples of independent measurements of a set of observables X_i ≡ {x_i^(1), x_i^(2), . . .}, i = 1, . . . , M, where the target observable O_i is available only on N of these. These N are called the labeled data, and the remaining M − N are called the unlabeled data (UD). Our goal is to build an ML model F that predicts the target observable, O_i ≈ O_i^P ≡ F(X_i), by training an ML algorithm on a subset N_t < N of the labeled data. The bias-corrected estimate Ō of O is then obtained as

Ō = (1/(M − N)) Σ_{i ∈ UD} O_i^P + (1/N_b) Σ_{j ∈ BC} (O_j − O_j^P),   (1)

where the second sum runs over the N_b ≡ N − N_t remaining labeled samples (the BC set) and corrects for possible bias. Here O_i^P depends explicitly on X_i and implicitly on N_t and all training data {O_j, X_j}. For a fixed ML model F, the sampling variance of Ō is given by the sum of the sampling variances of the two terms in Eq. (1). In this work, we account for the full error, including the sampling variance of the training and bias-correction datasets, by using a bootstrap procedure [10] that independently selects N labeled and M − N unlabeled items for each bootstrap sample. Two additional remarks regarding bias correction are in order. First, while the bias correction removes the systematic shift in the prediction, it can increase the final error; i.e., the systematic error can be converted into a statistical error. In practice, for the two examples discussed below, the BC does not increase the error significantly. Second, there are two ways of bootstrapping the training and BC samples: (i) first partitioning the labeled data into training and BC sets and bootstrapping these, and (ii) bootstrapping over the full labeled set and then partitioning each bootstrap sample. We used the latter approach.
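The estimator and its bootstrap error can be sketched compactly. The sketch below (ours; synthetic data, placeholder names) uses a linear model, which the text notes is adequate for the first example, and follows approach (ii): each bootstrap sample resamples the full labeled set and then partitions it into training and BC subsets, so the variance of the training step is included.

```python
# Sketch of the bias-corrected estimator of Eq. (1) with bootstrap errors.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
M, N, N_t = 2263, 680, 60                  # total, labeled, training (as in Sec. II)
X = rng.normal(size=(M, 21))               # stand-in for the C_2pt inputs
O = X @ rng.normal(size=21) + 0.3 * rng.normal(size=M)   # stand-in target

labeled, unlabeled = np.arange(N), np.arange(N, M)

def estimate(lab_idx, unlab_idx):
    train, bc = lab_idx[:N_t], lab_idx[N_t:]           # partition labeled set
    model = LinearRegression().fit(X[train], O[train])
    bias = (O[bc] - model.predict(X[bc])).mean()       # bias-correction term
    return model.predict(X[unlab_idx]).mean() + bias   # Eq. (1)

boots = [estimate(rng.choice(labeled, N), rng.choice(unlabeled, M - N))
         for _ in range(200)]
print(f"O_bar = {estimate(labeled, unlabeled):.4f} +- {np.std(boots):.4f}")
```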
II. EXPERIMENT A: NUCLEON ISOVECTOR CHARGES
For a first example, we demonstrate that this method reduces the computing cost for the isovector (u − d) combination of the axial (A), vector (V), scalar (S), and tensor (T) charges of the nucleon [11,12]. On the lattice, the nucleon charges are extracted from the ratio of the three-point [C_3pt^{A,S,T,V}(τ, t)] to two-point [C_2pt(τ)] correlation functions of the nucleon. In the three-point function, a quark bilinear operator q̄Γq is inserted at Euclidean time t between the nucleon source and sink. The desired ground-state result is obtained by removing the excited-state contamination [13,14] using calculations at multiple source-sink separations τ and extrapolating the results to τ → ∞.
The results presented use correlation functions already calculated on the a09m310 ensemble generated by the MILC Collaboration [15,16] at lattice spacing a ≈ 0.089 fm and pion mass M_π ≈ 313 MeV [11,12]. The data consist of 144,832 measurements on 2263 gauge configurations. On each configuration, 64 measurements from randomly chosen and widely separated source positions were made. The quark propagators were calculated using the multigrid inverter [17,18], ported to the CHROMA software suite [19], with a sloppy stopping criterion. The bias introduced by using a sloppy convergence condition is much smaller than the statistical uncertainty for nucleon observables [12,20] and is therefore neglected in this study. If necessary, however, it can easily be incorporated by modifying Eq. (1).
The correlation coefficients between the various C_3pt measured at t = τ/2 = 5a and C_2pt at various values of τ are shown in Fig. 1. The strongest correlation is with the value of C_2pt near the sink of C_3pt at τ = 10a, and not near the operator insertion at t = 5a. Our intuitive understanding of why the correlation is strongest with C_2pt(10a) is as follows: the spectral decompositions of the two correlation functions are similar except for the insertion of the operator at t = 5a in C_3pt(10a). If the ground state saturates these correlation functions, then the extra term in C_3pt is the matrix element of this operator within the ground state of the proton. This matrix element can be considered as inserting a number (the charge) at t = 5a in C_2pt. If the configuration-to-configuration fluctuations in the matrix element are small, then one expects a strong correlation between C_2pt(10a) and C_3pt(10a). In addition, there are strong correlations between successive time slices of C_2pt; thus, one expects the correlation of C_3pt(10a) with C_2pt to be spread over a few time slices about t = 10a, as also indicated by the data in Fig. 1. In the more realistic case, in which the nucleon wave function at t = 5a has significant contributions from a tower of excited states, the operator can also cause transitions between these states, and its insertion can no longer be approximated by just one number. One can still expect that operators for which these transition matrix elements are small will have stronger correlations. Based on the observed pattern of excited states, discussed in Ref. [11], we expect the ordering of correlations V > T > A > S; the observed pattern is shown in Fig. 1.

It is the existence of such correlations that allows the prediction of C_3pt from C_2pt using a boosted decision tree (BDT) regression algorithm available in the SCIKIT-LEARN PYTHON ML library [21]. BDT is a ML algorithm that builds an ensemble of simple decision trees such that each successive decision tree corrects the prediction error of the previous one. The result is a powerful regression algorithm with a small number of tuning parameters and a low risk of overfitting. It is also fast: for the data sizes we are considering, it takes only a couple of minutes on a laptop to find an appropriate predictor and evaluate it on the unlabeled samples. The SCIKIT-LEARN implementation of the BDT we used in this study is based on the Classification and Regression Trees (CART) algorithm [22] with gradient boosting [23,24]. For the prediction of C_3pt, we use 100 boosting stages of depth-3 trees with a learning rate of 0.1 and a subsampling fraction of 0.7. Note that, in this example, the pattern of correlation is such that a linear regression algorithm (such as LASSO [25,26] or RIDGE [27]) gives predictions with reasonable precision. Such a simplification does not occur for the second example described later.

TABLE I. Average of C^Γ_3pt(10a, 5a)/⟨C_2pt(10a)⟩ on the unlabeled dataset. DM is the directly measured result, P1 is the BC prediction defined in the text, with the bias-correction factor given in column 4. For the prediction without BC, we used the full 680 labeled configurations for training of the BDT. Note that for this large dataset, the bias correction and the increase in the error in the prediction with BC are negligible.
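A minimal sketch of this regression step with SCIKIT-LEARN, using the hyperparameters quoted above, is given below; the synthetic arrays, their shapes, and the toy linear relation between C_2pt(10a) and C_3pt stand in for the real correlator data and are assumptions.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

# Synthetic stand-ins for the correlator data; rows = measurements,
# columns = C_2pt at time slices tau/a = 0..20 (shapes are assumptions).
rng = np.random.default_rng(0)
n_train, n_unlabeled, n_slices = 3840, 20000, 21   # e.g. 60 cfgs x 64 sources
X_train = rng.lognormal(size=(n_train, n_slices))
y_train = 0.9 * X_train[:, 10] + 0.05 * rng.normal(size=n_train)  # toy C_3pt
X_unlabeled = rng.lognormal(size=(n_unlabeled, n_slices))

# Hyperparameters quoted in the text: 100 boosting stages of depth-3 trees,
# learning rate 0.1, subsampling fraction 0.7.
bdt = GradientBoostingRegressor(n_estimators=100, max_depth=3,
                                learning_rate=0.1, subsample=0.7)
bdt.fit(X_train, y_train)
c3pt_pred = bdt.predict(X_unlabeled)   # predicted C_3pt on unseen C_2pt data
```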
The outline of the calculation is as follows:

1. For each (τ, t), the BDT is trained using the set of C_2pt data (input) and C^{A,S,T,V}_3pt(τ, t) (output). The trained BDT can then take as input the unseen C_2pt data and output the predicted C^{A,S,T,V}_3pt(τ, t). To predict C_3pt at a given (τ, t), one can use the data for C_2pt on all time slices. The essence of a trained BDT is that it gives larger weight to the input C_2pt elements with higher correlation with the target observable.
2. The trained BDT is first used on the dataset designated for bias correction (BC) to predict C^{A,S,T,V}_3pt(τ, t). The bias-correction factor is then determined by comparing this prediction with the corresponding directly measured values on the same BC set.
3. The trained BDT is next used on the unlabeled C_2pt dataset to give the predicted C^{A,S,T,V}_3pt(τ, t).
4. To the average of this predicted C^{A,S,T,V}_3pt(τ, t) set, the bias-correction factor is added to give the BC prediction we call P1.
5. The statistical precision can be improved by constructing the weighted average of the BC prediction P1 and the directly measured (DM) results on the labeled dataset. We call this estimate P2. Note that the direct measurements on the labeled data and the predictions on the unlabeled data are not identically distributed because the prediction is not exact; however, the bias-corrected mean is the same. Therefore, when performing the excited-state fits discussed below, we simultaneously fit the two data sets with common fit parameters.
The training and prediction steps treat data from each source position as independent, whereas the bias-corrected estimates for each bootstrap sample are obtained using configuration averages in Eq. (1). The errors are obtained using 500 bootstrap samples; a minimal sketch of this bias-correction workflow follows.
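The sketch below implements steps 2-4 together with bootstrap errors; the function and variable names are hypothetical, the simple additive form of the correction follows the textual description rather than the exact Eq. (1), and in the real analysis the inputs would be configuration averages.

```python
import numpy as np

def bias_corrected_estimate(pred_unlabeled, pred_bc, direct_bc,
                            n_boot=500, seed=0):
    """BC prediction P1 = mean prediction on the unlabeled set plus the
    mean (direct - predicted) offset measured on the BC set; errors from
    a bootstrap over samples."""
    rng = np.random.default_rng(seed)
    bias = np.mean(direct_bc - pred_bc)          # step 2: bias-correction factor
    p1 = np.mean(pred_unlabeled) + bias          # steps 3-4: BC prediction P1
    boots = []
    for _ in range(n_boot):
        iu = rng.integers(0, len(pred_unlabeled), len(pred_unlabeled))
        ib = rng.integers(0, len(pred_bc), len(pred_bc))
        boots.append(np.mean(pred_unlabeled[iu])
                     + np.mean(direct_bc[ib] - pred_bc[ib]))
    return p1, np.std(boots)
```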
For the first example, we choose 680 of the 2263 configurations, separated by three configurations in trajectory order, as the labeled data. To determine the number of configurations to use for training, we varied it between 30 and 180 and found that the variance of the prediction on the unlabeled dataset was smallest and roughly constant between 60 and 120. We therefore picked 60 configurations from the labeled set for training and 620 for bias correction. The 1583 unlabeled configurations were used for prediction. The BDT regression algorithm was trained to predict C^{A,S,T,V}_3pt(τ, t)/N for all τ and t with {C_2pt(τ)/N for τ/a = 0, 1, 2, ..., 20} as input. The normalization N ≡ ⟨C_2pt(τ)⟩ was needed to make the inputs O(1) for numerical stability of the BDT in the SCIKIT-LEARN library.
Data in Table I show that the statistical errors in the bias-correction term are large; however, the error in the BC estimate is essentially identical to that in the DM estimate. This implies strong correlations between the two terms, the uncorrected prediction and the BC factor. Fig. 2 shows that the statistical fluctuations in the DM data are larger than the prediction error (PE ≡ C^DM_3pt − C^Pred_3pt) of the ML algorithm. The ratios of the standard deviations, σ_PE/σ_DM, of the PE and DM data are given in Table II. This pattern of smaller variance leads us to believe that, with further optimization, the reduction in computational cost given in Table IV can be increased significantly.
We have carried out two kinds of tests of the efficacy of the method. In Table III, we show data for C^Γ_3pt(10a, 5a)/⟨C_2pt(10a)⟩ for different numbers of labeled data, keeping (i) the full 2263 configurations and (ii) 500 configurations. We find that the results, Prediction(N, N_t), are consistent for different numbers of labeled data in both cases. Even when only ten configurations (640 measurements) are used for the training dataset, one gets reasonable estimates. The errors scale roughly with the total number of configurations, as can be seen by comparing the upper and lower tables.
In Fig. 3, we compare the prediction P2 of C^{A,S,T,V}_3pt at all τ and t [column (c)] with the DM results on the labeled and full data, shown in columns (a) and (b), respectively. The observed dependence on τ and t is due to contributions from excited states of the nucleon, and the desired ground-state result is given by the limit τ → ∞. This can be obtained by fitting the data at various t and τ using the spectral decomposition of C^{A,S,T,V}_3pt. Fig. 3 shows such a fit assuming only the lowest two states contribute to the spectral decomposition, i.e., the two-state fit described in Refs. [11,12,28]. The lines show the result of this fit for the various τ, and the gray band gives the τ → ∞ value. We find that the prediction P2 in column (c) is consistent with the DM results on the full dataset.
We can further improve the prediction if data for a single value of τ, say C^{A,S,T,V}_3pt(τ/a = 12), are available on the full dataset. Then, in the training stage, we use as input both C_2pt and C^{A,S,T,V}_3pt(τ/a = 12). Having trained the BDT on the labeled data, we use C_2pt and C^{A,S,T,V}_3pt(τ/a = 12) as input to predict C_3pt(τ/a = 8, 10, 14); we label this estimate VP2. These results are shown in Fig. 3, column (d). Including C^{A,S,T,V}_3pt(τ/a = 12) in the training and prediction stages increases the computational cost relative to P2 but reduces the errors. For a fixed size of error, VP2 is more efficient than P2, as shown in Table IV. A comparison of the predictions from C_2pt (P2) and from C_2pt and C^{A,S,T,V}_3pt(τ/a = 12) (VP2) versus DM is shown in Table IV for the charges g_{A,S,T,V} obtained after the extrapolation τ → ∞ using the four values of τ. While both estimates, P2 and VP2, are consistent with the DM, VP2 is closer to DM in both the central value and the error. Taking into account the increase in the statistical uncertainty (scaling the cost by the square of the number of measurements) in the predicted results, the ML analysis VP2 provides between a 7% and 26% reduction in the computational cost. The amount of gain is observable dependent.
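A sketch of the VP2 variant follows, in which the measured C_3pt(τ/a = 12) is appended as an extra input feature; the toy data, the linear relations, and the train/predict split sizes are assumptions for illustration.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

# Toy data standing in for the real correlators.
rng = np.random.default_rng(1)
n, n_slices = 2000, 21
c2pt = rng.lognormal(size=(n, n_slices))
c3pt_tau12 = 0.8 * c2pt[:, 12] + 0.02 * rng.normal(size=n)  # measured everywhere
target = 0.9 * c2pt[:, 10] + 0.03 * rng.normal(size=n)      # e.g. C_3pt(tau=10a)

# VP2 input: all C_2pt time slices plus the measured C_3pt(tau/a = 12) column.
X = np.column_stack([c2pt, c3pt_tau12])
bdt = GradientBoostingRegressor(n_estimators=100, max_depth=3,
                                learning_rate=0.1, subsample=0.7)
bdt.fit(X[:640], target[:640])        # train on the labeled portion
vp2_pred = bdt.predict(X[640:])       # predict C_3pt at the remaining tau values
```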
III. EXPERIMENT B: CP VIOLATING PHASE IN THE NEUTRON STATE
The second example is taken from the calculation of the matrix element of the chromoelectric dipole moment (cEDM) operator, O_cEDM ≡ i q̄ σ^{μν} G_{μν} γ_5 q, where G_{μν} is the gluon field strength tensor, within the neutron state. It arises in theories beyond the standard model and violates parity P and time-reversal T symmetries, or equivalently, charge C and CP symmetries in theories invariant under CPT. Since any CP violating (CPV) operator contributes to the neutron electric dipole moment (nEDM), a bound or a nonzero value for the nEDM in coming experiments will constrain novel CP violation [29-31]. So far only preliminary lattice QCD calculations exist, and cost-effectively improving the statistical signal is essential [32-34]. We have proposed a Schwinger source method (SSM) approach [35,36] that exploits the fact that the cEDM operator is a quark bilinear. In the SSM, the effects of the cEDM interaction are incorporated into the two- and three-point functions by adding the cEDM term, with coupling ε, to the Dirac clover fermion action; an analogous modification, with coupling ε_5, adds the pseudoscalar operator O_{γ_5} ≡ i q̄ γ_5 q, which mixes with the cEDM operator due to quantum effects [37]. With CP violation, the Dirac equation for the neutron spinor u becomes (i p_μ γ^μ + m e^{−i2αγ_5}) u = 0; i.e., the neutron spinor acquires a CP-odd phase α (α_5), which is expected to be linear in ε (ε_5) for small ε (ε_5). At leading order, these phases can be obtained from the four two-point functions, C_2pt, C^P_2pt, C^{P,ε}_2pt, and C^{P,ε_5}_2pt, where the superscript P indicates an additional factor of γ_5 in the spin projection [34,38]. (C^P_2pt has a zero mean, but its fluctuations are correlated with C^{P,ε}_2pt and C^{P,ε_5}_2pt; it can, therefore, be used for variance reduction [34].) The correlator C^{P,ε}_2pt (C^{P,ε_5}_2pt) is constructed using quark propagators that include the O_cEDM (O_{γ_5}) term and is expected to be imaginary and to vanish as ε → 0 (ε_5 → 0). In a first step, we show predictions of the BDT regression algorithm for these two correlators using only C_2pt and C^P_2pt. For the training and prediction, we use the C_2pt, C^P_2pt, C^{P,ε}_2pt, and C^{P,ε_5}_2pt measured in Refs. [35,36] on 400 MILC highly improved staggered quark (HISQ) lattices at a = 0.12 fm and M_π = 310 MeV (the a12m310 ensemble) with clover fermions. On each configuration, these correlators are constructed using 64 randomly chosen, widely separated sources with a sloppy stopping condition, the effects of which are again ignored. Out of the 400 configurations, 120 configurations, separated by three configurations in trajectory order, are chosen as the labeled data, and the remaining 280 configurations are used as the unlabeled data. From the labeled data, 70 randomly chosen configurations are used for training. Only 50 configurations sufficed for bias correction in this case because the ratio of standard deviations of the prediction error to the DM (σ_PE/σ_DM) is small, as shown in Fig. 4. Errors are obtained using 200 bootstrap samples.

FIG. 4. Distribution of Im C^{P,ε}_2pt(10a) (left) and Im C^{P,ε_5}_2pt(10a) (right), averaged over sources on each configuration, shown in light gold, with the prediction error in dark red. The ratio of the standard deviations σ_PE/σ_2pt ≈ 0.18 for O_cEDM and 0.4 for O_{γ_5}.
The BDT regression algorithm is trained to predict the imaginary parts of C^{P,ε}_2pt and C^{P,ε_5}_2pt using both the real and imaginary parts of C_2pt and C^P_2pt. Note that in the absence of the CPV terms, C^P_2pt and the imaginary part of C_2pt average to zero, but they have nonzero correlations with the target imaginary parts of C^{P,ε}_2pt and C^{P,ε_5}_2pt. The BDT regression algorithm with 500 boosting stages of depth-3 trees, a learning rate of 0.1, and a subsampling fraction of 0.7 gives a good prediction, as shown in Fig. 4. Because of nonlinear correlations, the BDT works better than linear regression algorithms in this case; the prediction error is about 50% larger with linear models at t = 1 and decreases to less than 10% by t = 8. Again, for numerical stability, all data fed into the BDT algorithm are normalized by ⟨C_2pt(τ)⟩.
Using the predicted C^{P,ε}_2pt and C^{P,ε_5}_2pt on all time slices, we calculate the CPV phases α and α_5 by taking their ratio with C_2pt, since C^{ε}_2pt and C^{ε_5}_2pt differ from C_2pt only at O(ε²). Fig. 5 shows the comparison between the CPV phase calculated from DM on the full and labeled data and from the ML-predicted data. The horizontal lines give the averages over the plateau region where the excited-state contamination is small. Results for α and α_5 are summarized in Table V. To get the improved ML predictions, we combine the prediction on the 280 unlabeled configurations with the DM data on the 120 labeled configurations. These combined data are analyzed following the same bootstrap resampling procedure used in the first example.
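A sketch of the leading-order phase extraction is given below; any O(1) normalization factor in the exact definition of α, and the plateau window, are assumptions here.

```python
import numpy as np

def cpv_phase(im_c2pt_P_eps, c2pt):
    # Leading-order CPV phase alpha(t) from the ratio of the gamma_5-projected
    # imaginary part to the ordinary two-point function.
    return im_c2pt_P_eps / c2pt

def plateau_average(alpha_t, t_min, t_max):
    # Average over the plateau region where excited-state contamination is small.
    return np.mean(alpha_t[t_min:t_max + 1])
```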
The prediction uses 30% of the data for C^{P,ε}_2pt and C^{P,ε_5}_2pt and 100% for C^P_2pt and C_2pt. This reduces the total number of propagator calculations by 47% compared to the direct measurement. Taking into account the increase in the statistical uncertainty, the computational cost reduction is about 30%, as shown in Table V.
IV. CONCLUSION
In conclusion, the proposed ML algorithm, used to predict compute-intensive observables from simpler measurements, gives a modest computational cost reduction of 7%-38% for the observables analyzed here, as summarized in Tables IV (VP2) and V (P2). The technique is, however, general, provided one can find inexpensive measurements that correlate well with the observable of interest. The computational cost reduction depends on the degree of correlation. We are investigating other ML methods to further improve the quality of the prediction and reduce the computational cost.
ACKNOWLEDGMENTS
We thank the MILC Collaboration for providing the 2+1+1-flavor highly improved staggered quark lattices.
"Physics"
] |
Billing and insurance-related administrative costs in United States’ health care: synthesis of micro-costing evidence
Background The United States’ multiple-payer health care system requires substantial effort and costs for administration, with billing and insurance-related (BIR) activities comprising a large but incompletely characterized proportion. A number of studies have quantified BIR costs for specific health care sectors, using micro-costing techniques. However, variation in the types of payers, providers, and BIR activities across studies complicates estimation of system-wide costs. Using a consistent and comprehensive definition of BIR (including both public and private payers, all providers, and all types of BIR activities), we synthesized and updated available micro-costing evidence in order to estimate total and added BIR costs for the U.S. health care system in 2012. Methods We reviewed BIR micro-costing studies across healthcare sectors. For physician practices, hospitals, and insurers, we estimated the % BIR using existing research and publicly reported data, re-calculated to a standard and comprehensive definition of BIR where necessary. We found no data on % BIR in other health services or supplies settings, so extrapolated from known sectors. We calculated total BIR costs in each sector as the product of 2012 U.S. national health expenditures and the percentage of revenue used for BIR. We estimated “added” BIR costs by comparing total BIR costs in each sector to those observed in existing, simplified financing systems (Canada’s single payer system for providers, and U.S. Medicare for insurers). Due to uncertainty in inputs, we performed sensitivity analyses. Results BIR costs in the U.S. health care system totaled approximately $471 ($330 – $597) billion in 2012. This includes $70 ($54 – $76) billion in physician practices, $74 ($58 – $94) billion in hospitals, an estimated $94 ($47 – $141) billion in settings providing other health services and supplies, $198 ($154 – $233) billion in private insurers, and $35 ($17 – $52) billion in public insurers. Compared to simplified financing, $375 ($254 – $507) billion, or 80%, represents the added BIR costs of the current multi-payer system. Conclusions A simplified financing system in the U.S. could result in cost savings exceeding $350 billion annually, nearly 15% of health care spending. Electronic supplementary material The online version of this article (doi:10.1186/s12913-014-0556-7) contains supplementary material, which is available to authorized users.
Background
In a well-functioning health care system, sound administration is required to ensure efficient operations and quality outcomes. In the United States, however, the complex structure of health care financing has led to a large and growing administrative burden [1]. In 1993, administrative personnel accounted for 27% of the health care workforce, a 40% increase over 1968 [2]. Similarly, administrative costs as a percentage of total health care spending more than doubled between 1980 and 2010 [3]. Private insurers' overhead costs have also increased sharply, rising 117 percent between 2001 and 2010 [4].
In the U.S. multi-payer system, insurers' coverage, billing, and eligibility requirements often vary greatly, requiring providers to incur added administrative effort and cost [5]. These payment-related activities can be termed "billing and insurance-related" (BIR) [6]. On the provider side, BIR activities include functions related to interacting with payers, including filing claims, obtaining prior authorizations, and managed care administration. On the payer side, most administrative functions are billing related, with only a small portion spent on care-related issues [7]. Insurers' profits also contribute to BIR costs.
Several studies have used micro-costing methods (cost estimates constructed from detailed classification of resource use or expenditures) to quantify the portion of administrative costs attributable to BIR activities in the physician and hospital sectors. Though the specific set of methods used to estimate this cost varies by study, the general approach has been to identify the administrative functions related to BIR activities and use clinician interviews and/or surveys to determine the proportion of work time spent on these activities. In some studies, this process has been supplemented with additional interviews with non-clinical staff [8,9] and observations of work flows [9]. In California in 2001, the BIR component of administrative costs was as high as 61% for physicians (constituting 14% of revenue) and 51% for hospitals (6.6-10.8% of revenue) [7], with predominantly non-BIR activities such as scheduling and medical records management forming the rest of administrative spending. When adjusted to a standard definition of BIR, two other studies attributed 10-13% of revenue in physicians' offices to BIR costs [9,10].
Though studies have documented BIR costs in physician and hospital sectors, the specific analytical methods and components included in the analyses vary, rendering estimates mostly non-comparable. Thus, results cannot be easily combined into a system-wide estimate. To address this problem, we synthesized available micro-costing data on BIR costs. We use an explicit, consistent, and comprehensive definition of BIR to calculate BIR costs in well-studied sectors; estimate the portion of BIR spending in other provider sectors; and present a system-wide estimate of total BIR costs in the U.S. health care system in 2012. We also calculate potential savings from a system with simplified financing, by comparing measured BIR in US health care sectors to lower levels observed with different financing mechanisms. This paper updates preliminary information developed for an Institute of Medicine roundtable on Value and Science-Driven Health Care [6]. It is intended to facilitate policy discussions about reducing the BIR component of administrative costs.
Methods
Overview

Drawing on U.S. National Health Expenditures (NHE), existing research, and publicly reported data, we estimated total and added BIR costs in the U.S. health care system in 2012. Our estimates included the following sectors: physician practices, hospitals, private insurers, public insurers, and "other health services and supplies." We assembled micro-costing estimates of total and added BIR costs from various studies [5,8], as well as the percentage of revenue spent on BIR [5,7,9,10]. We reconciled differences in methods and findings by adjusting estimates to include the same BIR activities, payers, and cost categories (detailed below). We calculated total BIR costs for each sector as the product of the 2012 U.S. NHE for that sector and the proportion of that sector's revenue used for BIR. To calculate added BIR costs, we adjusted our total estimates using benchmarks from simplified financing systems (detailed below). To assess the effect of input uncertainty, we performed multiple sensitivity analyses.
Health system sectors
We defined the sectors using categories designated as "personal health care" in the Centers for Medicare and Medicaid Services' (CMS) accounting of NHE and, in the case of payers, the categories designated as "health insurance" [11] (Additional file 1: Table S1). Examples of categories included under the "other health services and supplies" sector were nursing care, home health care, prescription drugs, and other medical products.
Total BIR
We calculated total BIR costs for each sector as:

Total BIR costs = 2012 NHE × % revenue for BIR

For example, the 2012 projected NHE for physician and clinical services was $542.9 billion [11], and the estimated average BIR cost for physicians as a percent of their gross revenues was 13% [5,7,9]. Thus, we calculated total BIR costs for physician practices in 2012 as $70.6 billion.
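The calculation is a one-line product; a minimal sketch in Python reproducing the worked example above:

```python
def total_bir(nhe_billion, pct_revenue_bir):
    """Total BIR costs = 2012 NHE x fraction of revenue used for BIR."""
    return nhe_billion * pct_revenue_bir

# Worked example from the text: physician and clinical services.
physician_bir = total_bir(542.9, 0.13)
print(f"Physician-practice BIR: ${physician_bir:.1f} billion")  # ~70.6
```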
Existing micro-costing estimates of BIR costs in physician practices vary substantially due to differences in analytic methods and the BIR functional areas included in the analyses [5,7-9,12]. Rather than select only the estimates that were obtained using the same BIR definition and analytic method, we undertook a systematic process to make the evidence more directly comparable. To do this, we classified BIR into sub-components by type of cost (e.g., contracting, insurance verification, service coding, billing, information technology, overhead) and payer (e.g., private, public). We adjusted each cost study as necessary to include all costs (e.g., overhead) and payers (e.g., public payers), based on data from other cost studies and from the NHE. For example, our estimate of 13% of revenue for BIR costs in physician practices is based on a synthesis of three published studies [5,7,9]. It includes BIR costs at multi-specialty, single-specialty primary care, and single-specialty surgical practices. Each study was adjusted for missing information. In the study by Morra and colleagues, the reported estimate of BIR costs at 8.5% of revenue accounted for both public and private payers but did not include the full range of BIR functional areas in physician practices [5]. Thus, we adjusted the Morra estimate to include information technology, time for insurance verification, a portion of clinician coding of services, and overhead attributed to BIR administration. This translated to a total BIR of approximately 13.3% of revenue, or 12.2% of revenue if clinician coding is omitted. See Additional file 1: Table S2 for details of the synthesis transformations.
We estimated that 8.5% of hospital revenue goes towards BIR activities, based on the mid-point value for hospitals found by Kahn and colleagues [7]. For public insurers, we estimated 3.1% of revenue for BIR, which is the blended mean overhead for Medicare and Medicaid [13]. Since the majority of administrative functions for private insurers are BIR, we assumed the full value of private insurer overhead, including profits, as the percentage of revenue for BIR. We estimated this at 18%, calculated as the total enrollment-weighted mean overhead for the 19 largest for-profit, publicly traded insurers by market capitalization [14], using 2010 data filed with the Securities and Exchange Commission (SEC). Our estimate of private insurer BIR costs includes the administrative costs of private insurers for their administration of Medicare Advantage, Medicare Part D, and Medicaid managed care. We added these costs from the 2011 historical NHE to the total estimate of BIR for private insurers.
Recent data on BIR costs for categories within the "other health services and supplies" sector are absent from the literature, though some earlier data on total administrative costs are available. An analysis of 1999 data from a sample of nursing homes in California and home health agencies across the U.S. found administrative expenditures of approximately 19% and 35% of total expenditures, respectively [15]. We conservatively assumed that 10% of revenue for the categories in our "other health services and supplies" sector goes to BIR activities, which is the mean percentage for physician practices and hospitals. We vary these assumptions in sensitivity analyses. Table 1 shows the NHE and percent of revenue attributed to BIR for each sector.
Added BIR
We defined added BIR as the costs of BIR activities that exceed those in systems with simplified BIR requirements. For physicians, hospitals, and other providers, we used Canada's single-payer system for comparison. For private and public insurers, we used U.S. Medicare as a comparator.
We calculated added BIR costs for physicians, hospitals, and other health services/supplies as:

Added BIR = Total BIR in U.S. sector × (% BIR in U.S. sector − % BIR in Canadian sector) / % BIR in U.S. sector

Morra and colleagues estimated annual BIR costs in physicians' practices at $82,975 per physician in the U.S. versus $22,205 in Ontario, Canada [5], i.e., 73% lower. While data on BIR costs in U.S. hospitals exist [7], we found no comparable data on Canadian hospitals or on Canadian or U.S. non-physician health service or supply sectors. We assumed an added proportion of 73% for these sectors and varied this assumption in sensitivity analyses.
For private and public insurers, we calculated added BIR costs as:

Added insurer BIR = Total insurer BIR × (Insurer overhead − U.S. Medicare overhead) / Insurer overhead

Table 1 summarizes the proportion considered as the added BIR cost of the U.S. multi-payer system for each health care sector.
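Both added-cost formulas are simple rescalings; a minimal sketch follows, where the illustrative inputs are figures quoted in the text, and the published sector totals reflect additional adjustments not modeled here, so the outputs only approximate the reported results.

```python
def added_bir_provider(total_bir, pct_bir_us, pct_bir_canada):
    """Added BIR = Total BIR x (US %BIR - Canadian %BIR) / US %BIR."""
    return total_bir * (pct_bir_us - pct_bir_canada) / pct_bir_us

def added_bir_insurer(total_bir, overhead, medicare_overhead):
    """Added insurer BIR = Total BIR x (overhead - Medicare overhead) / overhead."""
    return total_bir * (overhead - medicare_overhead) / overhead

print(added_bir_provider(70.6, 1.0, 0.27))      # Canadian BIR 73% lower -> ~51.5
print(added_bir_insurer(198.0, 0.18, 0.015))    # vs. U.S. Medicare -> ~181.5
```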
Sensitivity analyses
Excluding clinician coding of services

BIR obligations likely require additional coding by clinicians, beyond that needed for clinical documentation, consuming up to 2.3% of physician revenue [9]. In our base-case estimate of BIR costs in physician practices, we included 50% of the cost of coding. If we exclude clinician coding of services as a BIR function, we calculate a revised estimate of 12% for the percentage of physician revenue spent on BIR, based on the average of three studies [5,7,9].
Canadian medicare
In the base-case analysis, we use U.S. Medicare as a comparison system against which to estimate the added BIR costs of private and public insurers. Due to differing estimates of U.S. Medicare overhead (i.e., excluding versus including private insurer administration of medical plans) [16], we explored the effect of using Canada's Medicare as an alternative comparator to calculate the excess BIR costs of U.S. insurers. We used an overhead estimate for Canada's Medicare of 1.8% (2011 forecast) [17] (Additional file 1: Table S3).
Total BIR
Due to uncertainty in some sector-specific inputs, we varied the percentage of revenue for BIR in each sector to obtain a plausible range of total BIR costs. Where available, we used lower and upper bound estimates from the literature; where unavailable, we varied the estimates by up to ten percentage points, using wider variations where the data were least certain, e.g., for the "other health services and supplies" sector. Varying the estimates in tandem, we obtained upper and lower bound estimates of total BIR costs across the U.S. health care system in 2012 (Additional file 1: Table S4).
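A sketch of this tandem variation is given below; the sector bounds passed in are placeholders, not the study's actual inputs.

```python
def total_bir_range(sectors):
    """Vary each sector's %BIR to its lower/upper bound in tandem and sum.
    `sectors` maps name -> (nhe_billion, pct_low, pct_base, pct_high)."""
    low = sum(nhe * lo for nhe, lo, _, _ in sectors.values())
    base = sum(nhe * mid for nhe, _, mid, _ in sectors.values())
    high = sum(nhe * hi for nhe, _, _, hi in sectors.values())
    return low, base, high

example = {"physician": (542.9, 0.10, 0.13, 0.14)}   # placeholder bounds
print(total_bir_range(example))
```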
Added BIR
For the "other health services and supplies" sector, we varied our baseline estimate of 27% (the ratio of Canadian to U.S. BIR costs) by 5 percentage points in either direction. For the hospital sector, we calculated a new ratio of 8.1% (Canadian to U.S. BIR costs) based on published data on total (not just BIR) hospital administrative spending in the U.S. and Canada [15] (text on added costs, Additional file 1).
Ethics statement
This research did not involve human subjects and thus did not require ethics committee review.
Results

Total BIR
Our base-case calculation is that BIR costs in the U.S. totaled $471 billion in 2012. Physicians' practices spent $70 billion on BIR activities, hospitals spent $74 billion, and the "other health services and supplies" sector spent an estimated $94 billion (Figure 1). Private insurers contributed the largest share of BIR costs, $198 billion; public insurers contributed $35 billion.
Added BIR
About $375 billion (80%) of annual BIR costs constitutes additional spending compared to a simplified financing system. This 80% reflects 73% savings among provider sectors [5] and 93% savings in the private insurance sector. When compared to Canada's single-payer system, added BIR costs in U.S. physicians' practices totaled $49 billion annually (Figure 1; Additional file 1: Table S3). In U.S. hospitals and the "other health services and supplies" sector, added BIR costs were $54 billion and $69 billion, respectively. When compared to BIR costs in U.S. Medicare, additional annual spending on BIR by private and public insurers totaled $185 billion and $18 billion, respectively (Figure 1). Figure 2 shows each health care sector's share of total added BIR costs. Private insurers contributed much of the added BIR spending, at 49%, though providers collectively represented nearly half of the total.
Sensitivity analyses

Excluding clinician coding
If clinician coding of services is excluded as a BIR activity, total and added BIR costs in physicians' practices are reduced modestly, to $65 billion and $48 billion, respectively.
Canadian medicare
Using Canada's Medicare overhead (1.8%) [17] instead of U.S. Medicare's (1.5%) [13] for comparison reduces added BIR costs for private and public insurers to $182 billion and $15 billion, respectively, yielding a revised estimate of $369 billion in overall added BIR costs (Additional file 1: Table S3).
Total BIR
Varying in tandem each of the sector-specific estimates of the percentage of revenue spent on BIR yields a plausible range for total 2012 BIR costs of $330 – $597 billion (Additional file 1: Table S4).
Added BIR
Varying the BIR cost ratios for the hospital and non-physician health services and supplies sectors as described above, and using the lower and upper bound estimates of total BIR in each sector, we obtained a plausible range for overall added BIR costs in the U.S. of $254 – $507 billion in 2012 (see Table 2).
Discussion
While published data exist on BIR costs for certain health care sectors, these isolated estimates do not provide the comprehensive portrayal needed to understand the overall costs of BIR in the U.S. Our analyses, which synthesize available micro-costing data on BIR costs using a consistent definition of BIR and extrapolate data to sectors lacking estimates, present the first system-wide estimate of total BIR costs across the U.S. health care system. Synthesizing data from existing studies, our analyses indicate that BIR costs totaled $471 billion in the U.S. in 2012; 80% of this represents additional costs when compared to a simplified financing system. If BIR costs were pared to those of benchmark systems, system-wide savings would exceed $350 billion per year.
Total BIR costs currently represent about 18% of U.S. health care expenditures (excluding government public health activities). Non-BIR administrative activities represent an additional 9.4% [7], leaving less than 73% of spending for clinical care (Figure 3; details of the estimation of non-BIR administrative costs in Additional file 1). Added BIR costs of $375 billion translate to 14.7% of U.S. health care expenditures in 2012, or 2.4% of GDP [19].
Our findings update and expand on previous estimates. Woolhandler et al. estimated total administrative spending in the U.S. health care system in 1999 at $294.3 billion, with added spending of $209 billion when compared to Canada [15]. Adjusting their estimates to 2012 health spending yields estimated added costs of approximately $448 billion, an estimate that falls within the upper bound of our sensitivity analysis. The earlier study assessed total administrative costs, not just BIR spending, and hence is not directly comparable to this study. However, a simplified payment system that blunts entrepreneurial incentives (as in Canada) might also reduce non-BIR administrative costs for such items as marketing and internal cost accounting.
Our estimates of BIR costs in physicians' practices are higher than those of previous studies, due to the more complete set of BIR activities, payers, and costs quantified in our analysis. Morra et al. estimated total BIR costs per U.S. physician of $82,975, translating to total and added BIR costs of $38 billion and $28 billion, respectively [5]. However, their analysis involved just a subset of BIR activities, as detailed above. Similarly, Heffernan et al.'s [8,12] analysis, which was limited to private payers, estimated added BIR costs of $26 billion. After adjustment to encompass the entire scope of BIR activities, payers, and overhead costs, these earlier estimates are consistent with ours (Methods and Additional file 1: Table S2). Remaining differences are small (less than 5% of total BIR costs) and most likely explained by nuances in the questions used to obtain BIR costs. We present synthesis mid-point estimates of this small variation for all relevant analyses (Additional file 1: Table S2).

Several caveats apply to our estimates. First, BIR estimates in the published literature are most robust for physicians' practices, with limited information available for hospitals and almost no data for categories within the sector defined as "other health services and supplies." Hence, we explored the effect of uncertainty in our estimates in sensitivity analyses.
Second, our analyses assume that BIR can be distinguished from other administrative functions. This seems a fair assumption, given that consistent findings were obtained for similar activities using varied methods. Qualitative claims by physicians of the burden of BIR lend further support to this assumption [10].
Finally, our estimate of total and added BIR is likely conservative on three accounts. First, we assume no BIR spending outside of the direct health sector, e.g., by employers or patients. Since employment-based coverage is pervasive in the U.S., documentation of the BIR costs of employers might well augment the estimates presented here. Second, for providers, we assume that costs such as public relations and marketing are incurred for non-BIR reasons. This assumption might underestimate added BIR costs, since such non-BIR administrative costs might also be lower in a simplified financing system. Finally, new evidence comparing total hospital administrative costs in the U.S. to those in Canada and other OECD countries suggests that added BIR for hospitals may be substantially higher than our base-case estimate [20].
Since these added costs are a function of the structure of the U.S. multi-payer system, some might characterize them as excess, in that they provide little to no added value to the health care system. If BIR functions produce secondary benefits, such as enhanced quality or utilization management, the high BIR costs in the U.S. might be justified. Some research suggests, for example, that prior authorization can reduce over-utilization of brand-name medications without reducing patient satisfaction [21]. It is also possible that BIR functions provide benefits that have not yet been quantified. Nonetheless, any unmeasured benefit would have to be large to offset the added BIR costs. Moreover, at least one study has found that higher administrative costs are associated with lower quality [22]. Hence, reducing BIR costs by adopting a simplified financing system would provide substantial recurring savings and produce an unequivocal benefit from a societal perspective. It is worth noting also that a simplified financing system does not preclude utilization controls, and that such controls might be employed in single-payer systems while maintaining lower BIR costs.
Eliminating added BIR costs of $375 billion per year (14.7% of U.S. health care spending) would provide resources to extend and improve insurance coverage within current expenditure levels. Since uninsured individuals have about 50% of the utilization of insured individuals [23], the currently uninsured 15% of the population could be covered with roughly half of the $375 billion. Remaining savings could be applied to improved coverage for those already insured. Full financial analyses of single-payer insurance reform formalize and extend these analyses [24].
Unfortunately, recent reforms incorporated in the Affordable Care Act (ACA) and the American Recovery and Reinvestment Act (ARRA) are unlikely to substantially reduce BIR costs and administrative burden. Data on the BIR portion of administrative costs under these reforms are not yet available in the published literature. Using the BIR cost percentages identified in this analysis as a starting point, we projected BIR costs under the ACA in 2014 and 2018. Our projections were based on estimated increases in the insured population in each health sector (i.e., 7 million more people covered by private insurance and 8 million more by Medicaid in 2014; 13 and 12 million more, respectively, in 2018) [25]. Assuming parallel increases in health care utilization, stable administrative complexity, and an initial cost of $5.8 million to operate the exchanges [26,27], we estimate that implementing the ACA will increase system-wide BIR costs by 5 – 7% ($24 – $34 billion) in 2014 and 9 – 11% ($45 – $55 billion) in 2018 (in 2014 USD). Moreover, greater use of deductibles under the ACA will likely further increase administrative costs, since each claim will require processing and value adjustment before determining whether the deductible has been met. Thus, the new system will incur new BIR costs for both the insured and uninsured portions of care. Empirical evidence from similar reform in Massachusetts is not encouraging: exchanges added 4% to health plan costs [28], and the reform sharply increased administrative staffing compared with other states [29].
While it was hoped that the ARRA's incentives for adoption of health information technology (HIT) would reduce costs, partly by streamlining billing and administration [30], savings have not materialized [31,32]. Indeed, it appears that HIT will impose hefty implementation and training costs [33] and may require ongoing expenditures for IT upgrades and maintenance [4]. Moreover, the ACA's emphasis on financial incentives such as pay-for-performance may well increase administrative complexity, and hence costs [34].
A recent estimate suggests that simplifying administrative activities within the existing multi-payer system, by implementing a range of standardization, automation, and enrollment stabilization reforms, could save $40 billion annually [35]. While these savings are significant, we estimate that the annual administrative savings under a single-payer system would be nearly nine-fold higher. Though some argue that shifting to a single-payer system could propagate unintended financial hazards (e.g., over-utilization) and inefficiencies, as discussed previously, utilization controls can be employed in simplified financing systems while also keeping BIR costs down. Moreover, evidence from the U.S. Medicare program and the systems of several other countries [1] demonstrates that large, unified payers can achieve significantly greater efficiencies than multi-payer systems. Unified payment schemes enjoy economies of scale, sharply reduce the burdens of claims processing, and obviate the need for marketing, advertising, and underwriting expenses.
Conclusions
While the estimates presented here should continue to be refined through additional sector-specific research on BIR costs, the cost burden of BIR activities in the existing U.S. multi-payer health care system is clear. Implementation of a simplified financing system offers the potential for substantial administrative savings, on the order of $375 billion annually, which could cover all of the uninsured [36] and upgrade coverage for the tens of millions who are under-insured. Further research into the costs of BIR activities to employers and in areas such as home health care, nursing home care, and prescription drugs would augment the findings of this analysis. Data on BIR costs since implementation of the ACA are also needed to further illuminate the administrative effects of recent health reforms and provide additional tangible information for policy decision-making.
"Economics",
"Medicine"
] |
Multiobjective Optimization Model of Quality Education Reform in Research and Learning Activities
In order to solve the problem of quality education reform in research activities, a multiobjective optimization model for the optimal performance distribution of college teachers is proposed. We mainly use the multiobjective optimization method to study the allocation of optimal performance. On the basis of considering individual differences, and by introducing a score conversion function and a satisfaction function, a multiobjective optimization model is established with the goal of maximizing the total satisfaction of all assessment objects while balancing satisfaction as much as possible. The results show that the variation trends of the weak Pareto fronts at five different confidence levels are basically the same; that is, different confidence levels have little effect on the results. Conclusion: the use of a multiobjective optimization model can improve the optimal performance distribution of college teachers, so as to effectively promote the reform of quality education in research activities.
Introduction
Quality education mainly refers to a new educational model that aims to improve students' quality in all aspects and promote students' personal development to adapt to social development [1]. With the continuous reform of quality education, colleges and universities, as the main position to cultivate high-quality talents, should firmly grasp the pulse of the development of the times. It is particularly urgent to reform and practice characteristic quality education. As a unique group with vigor and vitality, college students' curiosity, independence, and thirst for knowledge all have distinctive personal characteristics and characteristics of the times. We should carry out characteristic quality education in colleges and universities, strengthen education guidance, send more versatile talents to society, promote the all-round development of the social economy, and make positive contributions to the prosperity of the country and the revitalization of the nation. Therefore, we should carry out the reform and practice of characteristic quality education, gradually establish and improve a scientific and reasonable characteristic quality education system for college students, and continuously cultivate diversified and all-round talents with a sense of responsibility, on the premise of fully respecting the wishes and actual conditions of college students, so as to respond to the needs of national development for talents.
As we all know, research on performance distribution has very important theoretical significance and application value for the efficient operation of governments, enterprises, schools, hospitals, and other institutions [2]. However, at present, few researchers use a multiobjective optimization model to study performance distribution. At the same time, we also note that the optimal performance distribution scheme not only depends on the effective aggregation of index scores at all levels but is also related to many factors, such as the individual differences of the appraisees and the requirements of the basic workload [3]. Inspired by this research, this paper mainly studies the application of a multiobjective optimization model to the optimal performance allocation problem. Based on the individual differences of the appraisees in the performance distribution, all appraisees are classified. Taking as variables the control parameters reflecting the incentive degree and the basic workload of all appraisees, a score conversion function and a satisfaction function describing the satisfaction of the appraisees are introduced to establish a multiobjective optimization model with the goal of maximizing the overall satisfaction of all appraisees while balancing satisfaction as much as possible [4]. Furthermore, the existence of weakly efficient solutions of the multiobjective optimization model is proved using the constraint scalarization method. In its application, this paper takes the scientific research work and teaching work completed by the examinees as the evaluation indices, uses the multiobjective optimization model to study the optimal performance allocation of teachers at a university, and uses a genetic algorithm to carry out numerical experiments [5], as shown in Figure 1.
Literature Review
At present, there are two main changes in the construction of general education courses in Chinese colleges and universities: first, general education elective courses have grown from scratch, from few to many, and have developed from focusing on increasing "quantity" to improving "quality," starting with the building of core general education courses. After more than 20 years of development, colleges and universities have opened as many as 200 or 300 general education elective courses, most of which cover the three knowledge fields of humanities, social sciences, and natural sciences. Most colleges and universities adopt the method of distributed electives; that is, the general elective courses are divided into several modules according to the nature of the discipline, requiring students to take certain credits from different fields. The general education elective course is an independent course set up by Chinese colleges and universities to highlight the idea and characteristics of quality education. At the initial stage of its establishment, due to the lack of deep understanding and effective management, general education elective courses generally had the problems of "miscellaneous content, disordered structure, poor quality, and low status," which made it difficult for them to effectively play their role in quality education. In recent years, some universities have strengthened the top-level design and policy support of general elective courses and started to focus on building a number of "general core courses" on the basis of the original general elective courses. Some colleges and universities have also set up general education curriculum committees, hiring scholars from multiple disciplines to conduct overall design and quality audits of the general education curriculum. This series of measures has pushed general education elective courses towards "systematization, standardization, high quality, and a core curriculum," and the quality and status of the curriculum are improving. This puts forward higher requirements for the optimal performance of university teachers.
Even so, in universities with a large number of professional colleges and departments, quality education or general education institutions without their own disciplines and specialties inevitably occupy an awkward, powerless position. Therefore, in order to fundamentally improve the status of general education, we also need deeper organizational system reform. In fact, the department of a university is not only a first-level administrative institution but also an institution that connects a certain type of discipline and specialty. That is to say, the organizational form of the department holds the personnel of different disciplines together, further clarifies the boundaries between different disciplines, and forms discipline barriers in the form of administrative institutions. Accordingly, the curriculum and teaching system are also organized with the discipline at the center, thus realizing the goal of professional talent training. On the surface, the implementation of quality education requires reform of the curriculum. In fact, it is a comprehensive reform of the university: a reconsideration of the educational purpose of the university, a repositioning of the teaching objective of undergraduate education, and a reconstruction of the talent training mode, and it will eventually involve reform of the management system and even the organizational system of the university.
Regarding this research problem, Yao, W. W. believes that colleges and universities, as the main position for talent training, aim to promote the all-round development of students through the training of college students [6]. We should fully realize the importance of the reform of characteristic quality education and carry characteristic quality education through the whole process of higher education, so that the cultivation of quality talents in colleges and universities is no longer superficial. Nyamutata believes that, on the one hand, it can improve students' ideological construction, cultivate students' awareness of self-study, competition, and innovation, and enable students to learn and grow under the correct guidance of education; on the other hand, characteristic quality education activities can also cultivate students' innovation ability and practical ability, stimulate students' learning potential to the greatest extent, link the cultivation of talents with the needs of the market, enterprises, and society, and promote students' social employment and life development [7]. Diab believes that performance evaluation is widely used in many areas, such as the comprehensive performance assessment of enterprise employees, the work quality assessment of university teachers or administrative personnel, the comprehensive performance assessment of bank employees, and ecological environment assessment [8].
On the basis of the current research, this paper proposes a multiobjective optimization model for the optimal performance allocation of university teachers, specifically taking the scientific research and teaching work completed by the assessment object as the assessment indices, using the multiobjective optimization model to study the optimal performance allocation of university teachers, and using a genetic algorithm to carry out numerical experiments [9].
Construction of Score Conversion Function Model.
As the performance distribution needs to provide a certain degree of incentive to the appraisees, and the quantitative standards of different types of indicator scores may differ, it is necessary to introduce a score conversion function to convert the initial scores. The score conversion function should include control parameters that reflect the incentive degree and the basic workload of all appraisees and should meet the basic requirement that the higher the initial score, the higher the conversion score [10]. First, the complete order relation information is converted into a score matrix and normalized; based on the normalized score matrix, a comprehensive score matrix and a difference degree matrix are constructed, and then a matching degree matrix is constructed. Furthermore, a single-target matching model is constructed based on the matching degree matrix, and the matching scheme is determined by solving the model. Therefore, this paper adopts the score conversion function shown in formulas (1) and (2), where x_j represents the basic workload of indicator j and y_ij represents the conversion score of indicator j for the i-th appraisee. Obviously, different values of λ_j and n will reflect different degrees of incentive, and the values of λ_j and n may also differ for different practical problems. Although there are many possible score conversion functions, the one selected in this paper satisfies y'' > 0; that is, the higher the actual score, the greater the increase in the conversion score [11]. Therefore, if an appraisee wants to get a higher conversion score, the actual score needs to be raised to a higher level, which reflects the incentive to the appraisee. In order to simplify the problem, this paper takes n = 2. Therefore, how to determine the optimal λ = (λ_1, λ_2) and the basic workload of the assessment indicators, x = (x_1, x_2), is the focus of this paper.
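The exact formulas (1) and (2) were not recoverable from the source; the sketch below therefore uses an assumed form, consistent with the stated properties (zero conversion score at or below the basic workload, convexity y'' > 0, and control parameters λ_j).

```python
import numpy as np

def conversion_score(a, x, lam, n=2):
    """Assumed score conversion function: y_ij = lam_j * max(a_ij - x_j, 0)**n.
    This specific form is an assumption; it is zero at or below the basic
    workload x_j and convex above it, as the text requires."""
    return lam * np.maximum(a - x, 0.0) ** n
```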
Range Estimation of Displacement.
If λ_1 = 0 or λ_2 = 0, the conversion score of the first or second appraisal indicator of all appraisees is 0, which is inconsistent with the actual situation. Therefore, this paper assumes that λ_j ≥ l_j, where l_j is a sufficiently small positive number. Similarly, λ_1 → +∞ or λ_2 → +∞ is also inconsistent with the actual situation. Therefore, this paper assumes that λ_j ≤ u_j, where u_j < +∞, j = 1, 2. Obviously, the basic workload satisfies x_j ≥ 0, j = 1, 2. However, the basic workload cannot be arbitrarily large.
For example, when x_j = max{a_{1j}, a_{2j}, ..., a_{pj}}, j = 1, 2, the conversion score of indicator j is 0 for all appraisees, which is obviously not in line with the actual situation. Therefore, it is necessary to estimate an upper bound for the basic workload [12]. It is assumed that the scores on each category of indicator for the different types of appraisees are not all the same and follow a normal distribution [13]. Since the basic workload corresponding to the various assessment indicators should not deviate too much from the average level of all assessment objects, this paper takes the upper confidence bound of the expected value of the normal distribution as the upper bound of the basic workload of the assessment indicators. Let X̄_{1j} and X̄_{2j}, respectively, denote the sample means of the j-th assessment indicator scores of type I and type II appraisees; then

X̄_{11} = (a_{11} + a_{21} + ··· + a_{s1})/s,
X̄_{12} = (a_{12} + a_{22} + ··· + a_{s2})/s,
X̄_{21} = (a_{s+1,1} + a_{s+2,1} + ··· + a_{p1})/(p − s),
X̄_{22} = (a_{s+1,2} + a_{s+2,2} + ··· + a_{p2})/(p − s).    (3)
Let S_{1j} denote the sample standard deviation of the scores on the j-th appraisal indicator for type I appraisees, and S_{2j} the corresponding sample standard deviation for type II appraisees, with the sample variances defined in the usual way. Then, at confidence level 1 − α, the two one-sided upper confidence limits relevant to the basic workload x_1 of the first assessment indicator are X̄_{11} + t_α(s − 1)·S_{11}/√s and X̄_{21} + t_α(p − s − 1)·S_{21}/√(p − s), respectively. In order to allow as many assessment objects as possible to participate in the performance distribution, this paper takes min{X̄_{11} + t_α(s − 1)·S_{11}/√s, X̄_{21} + t_α(p − s − 1)·S_{21}/√(p − s)} as the upper bound of the basic workload x_1. The upper bound of the basic workload x_2 of the second assessment indicator is obtained analogously.
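A sketch of this bound computation follows; the 1/√n factor in the confidence limit is the standard one-sided form for a normal mean, reconstructed where the extracted text was ambiguous, so it should be read as an assumption.

```python
import numpy as np
from scipy import stats

def workload_upper_bound(scores_type1, scores_type2, alpha=0.05):
    """Upper bound for the basic workload: the smaller of the two one-sided
    upper confidence limits X-bar + t_alpha(n-1) * S / sqrt(n) for the mean
    score of each appraisee type."""
    limits = []
    for scores in (np.asarray(scores_type1), np.asarray(scores_type2)):
        n = len(scores)
        t = stats.t.ppf(1 - alpha, df=n - 1)
        limits.append(scores.mean() + t * scores.std(ddof=1) / np.sqrt(n))
    return min(limits)
```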
Satisfaction Function.
This section proposes a satisfaction function to describe the satisfaction of the appraisees with the distribution scheme and then establishes its continuity on the estimation interval of the variables [14]. Write Y_i = Σ_{j=1}^{2} y_{ij} for i = 1, 2, ..., p, Y′ = Σ_{i=1}^{p} y_{i1}, and Y″ = Σ_{i=1}^{p} y_{i2}. For the i-th appraisee, the allocation proportion is Ȳ_i = Y_i/(Y′ + Y″), and the most favorable and most unfavorable allocation proportions are denoted U_i and L_i, respectively. Since the satisfaction of an appraisee should increase with the allocation proportion Ȳ_i, and since the greater the allocation proportion, the harder it becomes to improve satisfaction further, this paper proposes the satisfaction function given in formula (7). When Ȳ_i = U_i, the satisfaction of the i-th appraisee is 1; that is, the allocation proportion is the most favorable for the i-th appraisee. When Ȳ_i = L_i, the satisfaction of the i-th appraisee is 0; that is, the allocation proportion is the most unfavorable [15]. According to the satisfaction function constructed in this paper, the closer the performance allocation proportion of the i-th appraisee is to U_i, that is, the higher the allocation proportion, the higher the value of the satisfaction function; conversely, the value of the satisfaction function will be lower. On the other hand, as the allocation proportion increases, appraisees will have higher expectations of themselves, so it becomes more difficult to further improve their satisfaction. The satisfaction function proposed in this paper satisfies f″ < 0, which expresses exactly this property. Therefore, the satisfaction function reflects, to a certain extent, the satisfaction of the appraisees with the performance distribution scheme, and it is continuous on the estimation interval of the variables. In addition, writing m_j and M_j for the relevant extreme (minimum and maximum) indicator scores of the two types of appraisees, and assuming α ≥ 0.05, s ≥ 6, p − s ≥ 6, and that the index scores of the appraisees satisfy the stated conditions, the following conclusions hold: (i) at least one appraisee's score on the type 1 appraisal indicator is greater than the upper bound of the basic workload x_1; (ii) at least one appraisee's score on the type 2 appraisal indicator is greater than the upper bound of the basic workload x_2. Only case (i) is proved; case (ii) is similar. Suppose, to the contrary, that every type I appraisee's score satisfies a_{i1} ≤ X̄_{11} + t_α(s − 1)·S_{11}/√s, so that in particular (M_1 − X̄_{11})² ≤ (t_α(s − 1))²·S²_{11}/s. Under the stated condition that the largest deviation from the sample mean is attained at the maximum score M_1, (s − 1)S²_{11} = (a_{11} − X̄_{11})² + ··· + (a_{s1} − X̄_{11})² ≤ s(M_1 − X̄_{11})², and combining the two inequalities gives (s − 1) ≤ (t_α(s − 1))². However, when α ≥ 0.05 and s ≥ 6, (s − 1) > (t_α(s − 1))², which is a contradiction.
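The exact formula (7) was not recoverable from the source; the sketch below uses an assumed concave form that matches the stated properties f(L_i) = 0, f(U_i) = 1, monotone increase, and f″ < 0.

```python
import numpy as np

def satisfaction(Y, L, U):
    """Assumed satisfaction function: a square-root mapping of the allocation
    proportion Y onto [0, 1], with f(L) = 0, f(U) = 1, and f'' < 0."""
    r = np.clip((Y - L) / (U - L), 0.0, 1.0)
    return np.sqrt(r)
```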
Results and Discussion
This section uses the multiobjective optimization model established above to study the optimal performance allocation of university teachers. Taking a secondary unit of a university as an example, 86 teachers were assessed on the two aspects of teaching and scientific research. Table 1 shows the scores of the teaching assessment indicators and scientific research assessment indicators of the 86 teachers [16,17].
According to the data in Table 1, the assessment objects are divided into two types, teaching type and scientific research type, with s = 48 teaching-type teachers. Obviously, the data in Table 1 meet the required conditions. With ξ ∈ [0.01, 100], (MOP)₂ can be turned into (MOP)₃: min ξ subject to the corresponding constraints. Obviously, (MOP)₂ and (MOP)₃ are equivalent. A genetic algorithm is used to solve (MOP)₃, and its weak Pareto front is obtained, as shown in Figure 2. Because a genetic algorithm is used to compute (MOP)₃, the weak Pareto solutions obtained are a set of discrete points; the points on the left part of the figure are dense and those on the right part are sparse [18,19]. In this paper, five groups of weak Pareto efficient solutions (see Table 2) are randomly selected, and the corresponding distribution proportions and satisfaction curves are shown in Figures 3 and 4.
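The constraints of (MOP)₃ are not recoverable from this extract, so the following sketch only illustrates the computational step the text relies on: a small genetic algorithm with random weighted scalarization (Ishibuchi-style) evolving allocation vectors for two objectives — total satisfaction, to be maximized, and the satisfaction spread, to be minimized — and reporting the nondominated points of the final population. All sizes, bounds, and the satisfaction form are hypothetical.

```python
# Toy genetic algorithm approximating a weak Pareto front of two objectives.
import numpy as np

rng = np.random.default_rng(0)
p = 10                                   # number of appraisees (toy size)
L = rng.uniform(0.00, 0.05, p)           # hypothetical lower bounds L_i
U = L + rng.uniform(0.05, 0.10, p)       # hypothetical upper bounds U_i

def objectives(w):
    """Return (-total satisfaction, satisfaction spread); both minimised."""
    y = w / w.sum()                                   # allocation proportions
    f = np.clip((y - L) / (U - L), 0.0, 1.0) ** 0.5   # illustrative f_i
    return np.array([-f.sum(), f.max() - f.min()])

def nondominated(F):
    """Indices of rows of F not dominated by any other row."""
    keep = []
    for i in range(len(F)):
        dom = np.all(F <= F[i], axis=1) & np.any(F < F[i], axis=1)
        if not dom.any():
            keep.append(i)
    return keep

pop = rng.uniform(0.01, 1.0, (60, p))
for _ in range(200):
    F = np.array([objectives(w) for w in pop])
    lam = rng.uniform()                               # random scalarization weight
    parents = pop[np.argsort(lam * F[:, 0] + (1 - lam) * F[:, 1])[:30]]
    kids = 0.5 * (parents + parents[rng.permutation(len(parents))])
    kids *= rng.normal(1.0, 0.05, kids.shape)         # mutation
    pop = np.vstack([parents, np.abs(kids) + 1e-9])

F = np.array([objectives(w) for w in pop])
print(f"{len(nondominated(F))} nondominated points in the final population")
```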
It can be seen from Figures 3 and 4 that, although different Pareto solutions correspond to different performance distribution proportions and different appraisee satisfactions, the gaps between them are small [20]. Therefore, when making decisions, the decision-maker only needs to select any one group of weak Pareto solutions to obtain a performance allocation scheme with the highest possible total satisfaction and the most balanced satisfaction of the appraisees [21,22]. The weak Pareto fronts at different confidence levels are shown in Figure 5.
It can be seen from Figure 5 that the weak Pareto fronts under the five different confidence levels follow essentially the same trend; that is, different confidence levels have little impact on the results [23-25]. Aiming at the aggregation problem of evaluation scores in performance evaluation, this paper establishes a new multiobjective optimization model and uses a genetic algorithm to solve it; the numerical results show that this method has clear advantages. This paper mainly uses the multiobjective optimization method to study the distribution of optimal performance: on the basis of considering individual differences, and by introducing the score conversion function and the satisfaction function, a multiobjective optimization model is established with the goals of maximizing the total satisfaction of all assessment objects and balancing their satisfaction as much as possible; the existence of weak efficient solutions of the model is proved, and its application to teacher performance distribution in universities is studied.
Conclusion
This paper presents a multiobjective optimization model for the optimal performance allocation of university teachers. Taking the scientific research and teaching work completed by the appraisees as the assessment indicators, it studies the optimal performance allocation of university teachers using the multiobjective optimization model and carries out numerical experiments using a genetic algorithm.
This paper mainly uses the multiobjective optimization method to study the allocation of optimal performance. On the basis of considering individual differences, and by introducing the score transformation function and the satisfaction function, a multiobjective optimization model is established with the goals of maximizing the total satisfaction of all assessment objects and balancing their satisfaction as much as possible. The existence of a weak efficient solution of the model is then proved, and its application to the performance distribution of university teachers is studied.
Data Availability
The data used to support the findings of this study are available from the corresponding author upon request.
Conflicts of Interest
The authors declare no conflicts of interest. | 4,965.8 | 2022-09-07T00:00:00.000 | [
"Education",
"Economics"
] |
Matching complexes of small grids
The matching complex $M(G)$ of a simple graph $G$ is the simplicial complex consisting of the matchings on $G$. It is known that the matching complex $M(G)$ is isomorphic to the independence complex of the line graph $L(G)$. In this paper, we study the homotopy type of the matching complex of $(n \times 2)$-grid graph $\Gamma_n$. Braun and Hough introduced a family of graphs $\Delta^m_n$, which is a generalization of $L(\Gamma_n)$. In this paper, we show that the independence complex of $\Delta^m_n$ is a wedge of spheres. This gives an answer to a problem suggested by Braun and Hough.
Introduction
A matching on a simple graph G = (V(G), E(G)) is a subgraph of G whose maximal degree is at most 1. A matching is identified with its edge set. The matching complex M(G) of G is the simplicial complex whose simplices are the matchings on G. In general, it is very difficult to determine the homotopy types of matching complexes (see [3], [8], and [10]). We refer to [7] for a concrete introduction to this subject.
Throughout this paper, Γ(n, 2) denotes the (n × 2)-grid graph; in particular, we write Γ_n instead of Γ(n, 2).
Jonsson first studied the matching complexes of grid graphs in his unpublished paper [6]. Recently, Braun and Hough [4] investigated the matching complex of Γ_n, using discrete Morse theory to derive some homological properties of M(Γ_n). In fact, they studied more general simplicial complexes. To state this precisely, we need some preparation.
For a graph G, the independence complex I(G) of G is the simplicial complex whose simplices are the independent sets of G. The line graph L(G) of G is the graph whose vertex set is E(G), in which two distinct edges e and e′ of G are adjacent if and only if they have a common endpoint. Then the matching complex M(G) coincides with the independence complex of the line graph L(G) (Figure 1). For a pair m and n of positive integers, Braun and Hough [4] introduced the graph ∆^m_n, which is a generalization of L(Γ_n). The vertex set of ∆^m_n consists of e_i for i = 1, …, n and f_i^k for i = 1, …, n − 1 and k = 1, …, m. The adjacency relations are that f_i^k is adjacent to e_i and to e_{i+1} (i = 1, …, n − 1, k = 1, …, m), and f_i^k is adjacent to f_{i+1}^k (i = 1, …, n − 2, k = 1, …, m). Figure 2 depicts the graph ∆^4_5. Clearly, ∆^2_n and L(Γ_n) are isomorphic, and hence I(∆^2_n) and M(Γ_n) are isomorphic.
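Since the printed adjacency formula was lost in extraction, the reconstruction above can be checked mechanically: the sketch below builds ∆^m_n with exactly those relations and verifies the stated isomorphism ∆²_n ≅ L(Γ_n) for small n. The use of networkx and the string vertex labels are implementation choices, not part of the paper.

```python
# Build Delta^m_n under the reconstructed adjacency relations and check the
# stated isomorphism Delta^2_n ~= L(Gamma_n) for small grids.
import networkx as nx

def delta(m, n):
    G = nx.Graph()
    G.add_nodes_from(f"e{i}" for i in range(1, n + 1))
    for i in range(1, n):
        for k in range(1, m + 1):
            G.add_edge(f"e{i}", f"f{i}^{k}")              # f_i^k ~ e_i
            G.add_edge(f"e{i + 1}", f"f{i}^{k}")          # f_i^k ~ e_{i+1}
            if i < n - 1:
                G.add_edge(f"f{i}^{k}", f"f{i + 1}^{k}")  # f_i^k ~ f_{i+1}^k
    return G

for n in range(2, 7):
    gamma_n = nx.grid_2d_graph(n, 2)                      # the (n x 2)-grid
    print(n, nx.is_isomorphic(delta(2, n), nx.line_graph(gamma_n)))
```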
Braun and Hough [4] actually studied the independence complexes of ∆ m n . The purpose of this paper is to determine the homotopy types of the independence complex of ∆ m n . The following two theorems are the main results in the present paper.
Here Σ denotes the reduced suspension.
By Theorem 1.2, the homotopy type of I(∆^m_n) is recursively determined by I(∆^m_1), …, I(∆^m_4). In Section 4, we show that all these complexes are wedges of spheres, and hence we have the following theorem: the independence complex I(∆^m_n) is a wedge of spheres for all positive integers m and n (Theorem 1.4). In particular, the matching complex M(Γ_n) of Γ_n is homotopy equivalent to a wedge of spheres.
In particular, the homology groups of the independence complex of ∆^m_n have no torsion. This gives an answer to a problem suggested by Braun and Hough (see the end of [4]).
This paper is organized as follows. In Section 2, we review some facts concerning independence complexes. Since Theorem 1.1 is easily deduced from known results, we discuss it in this section. Theorem 1.2 and Theorem 1.4 are proved in Section 3 and Section 4, respectively.
Preliminaries
We refer to [7] and [9] for fundamental terms and facts concerning simplicial complexes.
We first recall the following simple observation on independence complexes (see [1] for example). For a vertex v of G, the link of v in I(G) coincides with I(G \ N_G[v]). Here I(G) \ v denotes the subcomplex of I(G) whose simplices are the simplices of I(G) not containing v. This observation clearly yields the following proposition:

Proposition 2.1. If the inclusion I(G \ N_G[v]) ↪ I(G \ v) is null-homotopic, then we have I(G) ≃ I(G \ v) ∨ ΣI(G \ N_G[v]).

Proof. By the above observation, it suffices to see that I(G) is obtained from I(G \ v) by attaching the cone v ∗ I(G \ N_G[v]) along a null-homotopic map.

Proposition 2.2. If N_G(v) ⊂ N_G(w) for distinct vertices v and w of G, then I(G) ≃ I(G \ w).

Here we give the proof of Theorem 1.1, since it easily follows from Proposition 2.2.
See Figure 3.

In this section, we prove Theorem 1.2. Throughout this section, we assume that m is an integer greater than 1. Suppose n ≥ 2, and put X_n = ∆^m_n \ e_{n−1}. Since N_{∆^m_n}(e_n) ⊂ N_{∆^m_n}(e_{n−1}), Proposition 2.2 implies the following:

Lemma 3.1. For n ≥ 2 and m ≥ 2, we have I(∆^m_n) ≃ I(X_n).
Next we consider the graph Y_n = X_n \ e_{n−2} (see Figure 5); we claim that I(X_n) and I(Y_n) are homotopy equivalent. Note that every vertex of X_n \ N_{X_n}[e_{n−2}] is a vertex of Y_n, and this means that the composite I(X_n \ N_{X_n}[e_{n−2}]) ↪ I(Y_n) ↪ I(X_n) is null-homotopic. Here we use the assumption m ≥ 2 to show that the first map is a homotopy equivalence. Thus the inclusion I(X_n \ N_{X_n}[e_{n−2}]) ↪ I(Y_n) is null-homotopic, and this completes the proof.
Next we study the homotopy type of I(Y_n).

Proof. We want to apply Proposition 2.1 to the vertex e_n of Y_n. Namely, we must show the following: (1) The inclusion I(Y_n \ N_{Y_n}[e_n]) ↪ I(Y_n \ e_n) is null-homotopic.
(2) The homotopy type of I(Y_n \ N_{Y_n}[e_n]) is Σ^m I(∆^m_{n−4}). (3) The homotopy type of I(Y_n \ e_n) is Σ^m I(∆^m_{n−3}). Define the induced subgraphs Z_n, Z′_n, and Z″_n of Y_n; it follows from the commutative diagram that the inclusion I(Z_n) ↪ I(Y_n) is null-homotopic. By the sequence of inclusions, we have that I(Y_n \ N_{Y_n}[e_{n−3}]) ↪ I(Y_n) is null-homotopic. This completes the proof of (1). Finally, we prove (3). By Proposition 2.2, it is easy to see that I(Y_n \ e_n) is homotopy equivalent to I(W_n), where W_n is the graph depicted in Figure 6. This completes the proof of (3).

In this section, we prove Theorem 1.4, which asserts that I(∆^m_n) is a wedge of spheres. The case m = 1 is proved by Theorem 1.1, and hence we assume m ≥ 2 in the rest of this section. By Theorem 1.2, it suffices to determine the homotopy types of I(∆^m_1), …, I(∆^m_4); for m ≥ 2, these complexes are described as follows:

Proof. Note that I(∆^m_1) is a point. It clearly follows from Proposition 2.2 that I(∆^m_2) ≃ I(K_2) = S⁰.
Consider the case n = 3. By Lemma 3.1, we have I(∆^m_3) ≃ I(X_3). Braun and Hough determined the homotopy type of the independence complex of X_3 (see Lemma 3.2 of [4]), but we give an alternative proof of this result for self-containedness. First, Proposition 2.2 implies that I(X_3 \ e_3) and I(X_3 \ {e_1, e_3}) are homotopy equivalent. Since X_3 \ {e_1, e_3} consists of m disjoint copies of K_2, we have I(X_3 \ e_3) ≃ I(X_3 \ {e_1, e_3}) = S^{m−1}.
On the other hand, applying Proposition 2.2 again, we have that I(X_3 \ N_{X_3}[e_3]) and I(K_2) = S⁰ are homotopy equivalent. Since every map from S⁰ to S^{m−1} is null-homotopic, the inclusion I(X_3 \ N_{X_3}[e_3]) ↪ I(X_3 \ e_3) is null-homotopic. Thus Proposition 2.1 implies I(X_3) ≃ S¹ ∨ S^{m−1}.
Finally we consider the case n = 4. By Proposition 3.2 and I(∆ m 1 ) = * , we have that I(X 4 ) ≃ I(Y 4 ). By Proposition 2.2, I(Y 4 \ e 4 ) is homotopy equivalent to the independence complex of the disjoint union of one isolated vertex and m-copies of K 2 , and hence contractible. In particular, the inclusion I( | 2,055.2 | 2018-12-28T00:00:00.000 | [
"Mathematics",
"Computer Science"
] |
Transformational optics of plasmonic metamaterials
Plasmonic metamaterials provide a convenient experimental platform for demonstrating the principles of transformational optics. In this paper, negative index imaging and guiding of surface waves using layered plasmonic metamaterials have been demonstrated. Imaging experiments have been compared with numerical simulations. In addition, a two-dimensional ε near zero metamaterial has been realized.
Introduction
Recent observations of negative refraction of surface plasmon polaritons (SPPs) in the visible frequency range [1,2] may soon bring about practical applications of the superlens concept originally introduced by Pendry [3]. It appears that regular dielectric materials deposited onto the metal film surface (such as gold or silver) may be perceived by SPPs as negative refractive index materials. It is important to note that the negative refractive index behavior in this case is omni-directional, and the figure of merit Re(k)/Im(k) of these two-dimensional (2D) negative index materials may reach values of the order of 10 [4]. This relatively high value, compared to the typical figures of merit of 3D negative index metamaterials [5], makes it possible to utilize 2D negative index materials in practically useful nano-optical devices. Negative refraction may find applications in novel linear and nonlinear nano-optical devices, which bypass the diffraction limit of conventional optics. While a very large number of theoretical papers on electromagnetic properties of negative index materials and devices exist (see for example an overview in [6]), experimental observations in the visible frequency range are limited to a few demonstrations of negative refraction at the positive-negative interface [2,4], measurements of transmission and reflection from very thin layers of these materials (see [5] and the references therein), and rudimentary imaging experiments [1]. Much more experimental work is needed to firmly establish the foundations and practical limitations of the novel and exciting theoretical ideas in this field. In this paper, we report the experimental observation of magnified negative index imaging, which occurs at the boundary between negative and positive index media, provide an experimental demonstration of surface wave guiding based on the transformational optics approach, and demonstrate an ε near zero plasmonic metamaterial.
Negative refraction and imaging by plasmonic metamaterials
Our approach to fabrication of plasmonic metamaterials in the visible frequency range is based on 2D optics of SPPs [7]. The wave vector of SPPs propagating over a metal-dielectric interface is defined by the expression

$$k_{\mathrm{SPP}}=\frac{\omega}{c}\left(\frac{\varepsilon_m(\omega)\,\varepsilon_d(\omega)}{\varepsilon_m(\omega)+\varepsilon_d(\omega)}\right)^{1/2},$$

where ε_m(ω) and ε_d(ω) are the frequency-dependent dielectric constants of the metal and dielectric, respectively. Above the resonant frequency described by the condition ε_m(ω) ∼ −ε_d(ω), the SPP group and phase velocities may have opposite signs, which leads to the negative refractive behavior of SPPs [1,2]. This behavior is illustrated in figure 1(a), which shows the typical SPP dispersion law for a metal surface covered with a thin layer of dielectric (see for example [8]). Since in ambient conditions all surfaces are covered with a thin (a few nm thick) layer of adsorbed water (this layer is responsible for the shear force between the tip and the sample in a near-field optical microscope [9]), this behavior of the SPP dispersion is generic. In addition, when the metal film is not too thick, the SPPs which live at the top and at the bottom surfaces of the film interact with each other, which leads to the appearance of the so-called long-range surface plasmon polariton (LSPP) and short-range surface plasmon polariton (SSPP) modes [7], the latter typically exhibiting negative refractive behavior. Thus, regular optical materials deposited onto a gold film surface may be perceived by SPPs as negative refractive index materials. However, as illustrated in figure 1, in a typical situation a layer of such material would behave as a high-pass filter: the low-k components of the SPP source field would perceive the deposited material as a positive index one, whereas the high-k components of the SPP source field would see it as a negative index material. Nevertheless, as illustrated in figures 1(b)-(d), the loss of low-k components of the field does not lead to deterioration of the spatial resolution. An interface between two media with opposite signs and unequal magnitudes of refractive index must exhibit magnified 'mirror' imaging, as shown in figure 2(a). The magnitude of the expected magnification is equal to M ∼ n_2/n_1. We were able to detect this effect by looking at the spatial distribution of SPP scattering inside a poly(methyl methacrylate) (PMMA) layer using a far-field optical microscope. The test sample geometry is shown schematically in figure 2(b). The triangular test patterns were formed near the edge of the PMMA layer using e-beam lithography, as described in [1]. The triangles were made of PMMA bars formed on top of the gold film surface in such a way that the spacing between the PMMA bars would allow phase-matched excitation of SPPs on the gold/air interface under illumination with 532 nm laser light (see figure 1 from [1]). Figure 2(d) demonstrates that magnified 'mirror' images of the original triangular patterns were indeed formed inside the PMMA layer. These images were observed using a conventional far-field optical microscope due to SPP scattering into photons by the defects of the PMMA layer. The observed magnification was approximately equal to 1.6, which corresponds to the magnitude of the effective 2D refractive index of PMMA: n_PMMA ∼ −1.6 [1]. The contrast in the images deteriorates away from the edge of the PMMA layer due to SPP losses. Numerical modeling of SPP propagation in this geometry, which is presented in figure 3(f), demonstrates good agreement with the experiment.
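As a quick numerical illustration of the dispersion relation quoted above, the sketch below evaluates k_SPP(ω) for a metal described by a lossless Drude model. The plasma frequency and the dielectric constant standing in for the PMMA-covered interface are illustrative round numbers, not values fitted by the authors.

```python
# Evaluate the single-interface SPP wavevector
#   k_spp = (w/c) * sqrt(eps_m * eps_d / (eps_m + eps_d))
# for an illustrative lossless Drude metal.
import numpy as np

c = 3.0e8                        # speed of light, m/s
wp = 1.37e16                     # rad/s, illustrative plasma frequency

def eps_metal(w):
    return 1.0 - (wp / w) ** 2   # lossless Drude permittivity

def k_spp(w, eps_d=1.0):
    em = eps_metal(w)
    return (w / c) * np.sqrt(em * eps_d / (em + eps_d + 0j))

for lam_nm in (532, 633):
    w = 2 * np.pi * c / (lam_nm * 1e-9)
    k = k_spp(w, eps_d=2.2)      # illustrative dielectric loading
    print(f"{lam_nm} nm: k_spp = {k.real:.3e} 1/m")
```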
On the other hand, no 'mirror' imaging has been observed when the test pattern was illuminated with 633 nm laser light (figure 3(e)), which is consistent with the absence of negative index SPP modes at this wavelength. This experiment provides additional strong evidence of negative refractive behavior of SPPs around the 500 nm wavelength.
Transformational optics demonstration
The transformational optics approach [10]-[12] allows one to control the path of light by creating complex spatial distributions of dielectric permittivity ε(r) and magnetic permeability µ(r), which emulate propagation of electromagnetic fields in some curvilinear space-time. A practical approach to transformational optics may be based on multilayer stacks of positive and negative index materials. If the stack is compensated so that the optical width of the stack is approximately equal to zero, light rays propagate through the stack in the direction perpendicular to the layers [4]. Curving the stack allows one to redirect the light rays. This approach has been demonstrated experimentally in the design of a 'magnifying superlens' [1]. In this design a concentric pattern of PMMA rings formed on the gold film surface operated as a concentric pattern of positive and negative refractive index materials. This pattern guided SPP rays emitted by point sources in the radial directions. Our numerical simulations presented in figures 3(a) and (b) indicate that the same approach may be used to guide SPP rays along arbitrary curvilinear paths. The geometry of layers shown in figures 3(a) and (b) is obtained by stretching the 'magnifying superlens' geometry along the horizontal axis. This deformation leads to the curvilinear propagation of the rays from the point sources located near the internal rim of the structure. Experimental realization of this geometry has also been demonstrated.
Demonstration of ε near zero plasmonic metamaterial
Since both the effective refractive index and the effective 2D 'dielectric constant' ε 2D of PMMA from equation (1) may change sign near the frequency of surface plasmon resonance ε m (ω) ∼ −ε d (ω), the effective average ε 2D of the layered plasmonic metamaterial may be made very close to zero by adjusting the widths d 1 and d 2 and the periodicity d 1 + d 2 of the multilayered structure. Figure 4 represents an experimental step in this direction. The d 1 and d 2 parameters of the concentric PMMA ring structure shown in figure 4(a) were varied gradually to produce the ε near zero condition at some distance from the center. This experimental approach appears to work regardless of the theoretical model used to describe the plasmonic metamaterial. The structure was illuminated by an external laser operating at 532 nm, at an illumination angle which corresponds to the phase-matching excitation of SPPs at the top and bottom outer rims of the structure. Since the periodicity of the structure changes away from the outer rim, SPPs are excited with much less efficiency anywhere else, and the picture of light scattering presented in figure 4(b) corresponds to SPP propagation through the central region of the structure. SPPs are supposed to experience total internal reflection from the ε near zero boundary, as illustrated in figure 4(c), unless they propagate exactly through the center of the structure. Comparison of the experimentally measured figure 4(b) with our theoretical simulations shown in figure 4(c) and with results of numerical simulations of Alu et al (figure 11 from [13]) strongly suggests that ε near zero conditions have been created. The similarity of these figures is quite compelling. Note that in simulations shown in figure 4(c) ε = 0.1 is used, which means that the phase and amplitude of the wave passing through the cylinder has not yet become completely uniform. The narrow beam passing through the center of the ε near zero structure may be used to produce efficient spatial directional filters. Figure 4(b) also demonstrates that most of the optical energy is trying to flow around the ε near zero area, in a manner which is similar to that described in [10]- [12]. Similar results can also be found in [14]. We should also note that theoretical simulations performed for a single dielectric step often indicate the strong coupling of 2D SPPs to radiating photons [15]. It should be noted that when multiple nearly periodic dielectric steps are involved, the interference of radiation from the steps may become either constructive or destructive. In the destructive interference case, the coupling of SPPs to 3D radiation should become suppressed. The experimental results presented in the paper can be explained by such destructive interference.
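The layer-averaging idea described above can be made concrete with a generic effective-medium estimate — a simple volume-weighted average of the two stripe permittivities, which vanishes when d₁ε₁ + d₂ε₂ = 0. This is only the simplest such model, not necessarily the one underlying the experiment (the text itself notes the approach works regardless of the theoretical model used).

```python
# Width-weighted average permittivity of a two-stripe period; the average
# is zero when d1*eps1 + d2*eps2 = 0.  Values are illustrative.
def average_eps(d1, eps1, d2, eps2):
    return (d1 * eps1 + d2 * eps2) / (d1 + d2)

eps_pos, eps_neg = 1.0, -2.56        # e.g. bare region vs. a negative stripe
d1 = 1.0                             # fix one stripe width (arbitrary units)
d2 = -eps_pos * d1 / eps_neg         # width that zeroes the average
print("d2/d1 =", round(d2 / d1, 3),
      "-> average eps =", average_eps(d1, eps_pos, d2, eps_neg))
```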
Conclusion
In conclusion, we have demonstrated that plasmonic metamaterials provide a convenient experimental platform for demonstration of the principles of transformational optics. In this paper, the negative index imaging and guiding of surface waves using layered plasmonic metamaterials has been demonstrated. Imaging experiments have been compared with numerical simulations, and a good match has been obtained. In addition, a 2D ε near zero metamaterial has been demonstrated experimentally. | 2,447.2 | 2008-11-01T00:00:00.000 | [
"Physics"
] |
Supercolossal Uniaxial Negative Thermal Expansion in Chloranilic Acid Pyrazine, CA-Pyz
There has been significant recent interest in exploiting the large dimension changes that can occur in molecular materials as a function of temperature, stress, or under optical illumination. Here, we report the remarkable thermal expansion properties of chloranilic acid pyrazine co-crystals. We show that the compound shows uniaxial negative thermal expansion over a wide temperature range with a linear contraction coefficient as low as −1500 × 10⁻⁶ K⁻¹ at 250 K. The corresponding 10% contraction between 200 and 300 K is an order of magnitude larger than in the so-called colossal contraction materials. We adopt a symmetry-inspired approach to describe both the structural changes that occur (using rotational symmetry modes) and the thermal expansion (using strain modes). This allows an extremely compact description of the phase transition responsible for this unusual behavior and gives detailed understanding of its atomic origins. We show how the coupling of primary and secondary strain modes in materials showing extreme expansion and contraction can lead to unusual reversals in the temperature dependence of cell parameters.
■ INTRODUCTION
There has been significant interest in the physical properties of crystalline organic and hybrid materials in recent years, particularly in some of the mechanical properties traditionally studied in metallic, ceramic, and polymer engineering materials. 1,2 The weaker bonding interactions present between organic molecules often enhance the magnitude of important properties, and their design-flexibility makes them a rich arena for discovery. Much of the interest in the field is driven by the possibility of creating stimulus-responsive materials that display macroscopic changes and can act as actuators or artificial muscles. 3−5 For example, there have been a number of reports over the years of "bending crystals", which undergo dramatic shape changes either on heating or on the application of mechanical stress, some of which have been recently reviewed. 3,4 This behavior is usually linked to the evolution of twin structures associated with phase transitions and, as such, is closely related to the phenomena that lead to effects such as shape memory and superelasticity in metal alloys like nitinol. There has, therefore, been significant interest in martensite-like transitions in organic crystals; 6−12 recent examples of materials exploiting these include the superelastic terephthalamide 13 and the organic shape memory material (PnBu4)+(BPh4)−. 14 Mechanical twinning leads to effects such as ferroelasticity and superelasticity, and a number of molecular systems have been reported with this property. 15−21 Finally, N,N-dimethyl-4-nitroaniline crystals have recently been shown to display superplasticity. 22 Herbstein provides a useful overview of the mechanism of organic transformations in the solid state, in particular summarizing the classic works of Ubbelohde and Mnyukh. 23 The weaker interactions in molecular materials can also lead to more extreme behavior under changes in temperature than in conventional materials. This is exemplified by so-called negative thermal expansion (NTE) materials, which show the unusual property of contracting in volume on heating. 24 Most of the "classic" phonon-driven oxide NTE systems such as ZrW2O8, 25 Sc2(WO4)3, 26 and zeolites 27 have negative linear contraction coefficients that are of comparable magnitude to the positive coefficients of normal materials (α_l ≈ −20 to 0 × 10⁻⁶ K⁻¹). ZrW2O8, for example, has α_l = −9 × 10⁻⁶ K⁻¹ from 2 to 300 K, corresponding to a percentage contraction Δl/l ≈ −0.1% over a 100 K range. 28 The term "colossal negative thermal expansion" was introduced to define materials with α_l more negative than −100 × 10⁻⁶ K⁻¹ and hence a Δl/l contraction larger than 1% over 100 K. The first such material was Ag3[Co(CN)6], which showed a uniaxial NTE of −130 × 10⁻⁶ K⁻¹; this extreme contraction is related to the inherent flexibility of its framework structure, in which the volume is defined by a weak Ag−Ag argentophilic interaction. 29 In essence, this soft interaction leads to a large positive expansion in two dimensions, with the framework topology driving a contraction along the perpendicular direction. Colossal isotropic NTE was subsequently reported in other framework systems such as the UiO-66 MOF. 30 Mullaney et al. have also shown how spin crossover can switch the geometry of flexible frameworks; by spin dilution, they could prepare materials with continuous cell parameter changes of +6/−4% over a 60 K range.
This is strongly reminiscent of the major "breathing" changes in the geometry of metal organic frameworks such as MIL-53 on gas uptake. 31 Extreme thermal expansion or contraction coefficients are also found in purely organic crystals where weaker intermolecular forces dominate. Some of the most dramatic are in the thermosalient 3,32 or "jumping crystals" such as (phenylazophenyl) palladium hexafluoroacetylacetonate 33 with α_a = +260.4 × 10⁻⁶ K⁻¹ between 223 and 348 K and N′-2-propylidene-4-hydroxybenzohydrazide, 34 with α_{a/b} ≈ +230 × 10⁻⁶ K⁻¹ and α_c ≈ −290 × 10⁻⁶ K⁻¹ over the temperature range 298−373 K. (S,S)-octa-3,5-diyne-2,7-diol 35 has been reported as a biaxial NTE material with α_a = +450 × 10⁻⁶ K⁻¹, α_b = −130 × 10⁻⁶ K⁻¹ and α_c = −250 × 10⁻⁶ K⁻¹, giving a relatively low overall volume expansion α_V = +60 × 10⁻⁶ K⁻¹ (all 240−330 K). Methanol monohydrate, in contrast, has a large volume expansion α_V = +500 × 10⁻⁶ K⁻¹ that causes a more modest contraction in one dimension, α_a = −60 × 10⁻⁶ K⁻¹ (both at 155 K), along with negative linear compressibility. 36 Here, we report the remarkable thermal expansion properties of a chloranilic acid pyrazine (CA-Pyz) co-crystal associated with a newly-discovered phase transition. The molecular structure and views of the three-dimensional packing (which will be discussed in detail later in the text) are shown in Figure 1.
Powder and single-crystal diffraction studies have shown that this material undergoes a phase transition which leads to extreme distortions of its unit cell. Its volume change from 200 to 300 K is one of the largest known for an organic crystal, and it shows extreme anisotropy in the contraction and expansion of its a and b cell parameters. Average linear thermal expansion coefficients approaching ±1000 × 10 −6 K −1 are observed from 200 to 300 K corresponding to 10% length changes, an order of magnitude larger than required by the definition of "colossal".
Our structural studies and the application of rigid-body rotational symmetry modes allow us to understand and describe the molecular origins of this fascinating behavior. We also present a simple model to explain the unusual temperature evolution of cell parameters that can arise in materials showing such extreme thermal expansion.
■ EXPERIMENTAL SECTION
Sample Preparation. Single crystals of CA-Pyz were obtained by slow evaporation of an acetonitrile solution of chloranilic acid (Aldrich, 98%) and pyrazine (Aldrich, 99%) in a 1:1 molar ratio. Chloranilic acid (0.0627 g, 0.3 mmol) and pyrazine (0.0240 g, 0.3 mmol) were dissolved separately in the minimum volume of acetonitrile (typically 6 and 1 mL, respectively) with gentle heating for 30 min. Orange-red rhombus-shaped crystals appeared after 2 days. Polycrystalline samples were produced by gentle grinding of crystals.
Single-Crystal Diffraction. Single-crystal X-ray diffraction data were acquired on an Oxford Gemini S Ultra diffractometer equipped with a CCD area detector, using Mo Kα radiation (λ = 0.71073 Å). Data were collected at 300, 275, 250, 230, 200, and 150 K on both cooling and warming. A Cryostream Plus Controller was used to control the experimental temperature. Crystal structures were solved using SIR92 37 within the CRYSTALS 38 software. All nonhydrogen atoms were refined anisotropically against F_hkl². All hydrogen atoms were located by difference Fourier maps and refined isotropically without any restraints. Single-crystal neutron diffraction data were collected on the SXD instrument of the ISIS Neutron and Muon Source at the Rutherford Appleton Laboratory. The crystal was attached to a goniometer head using adhesive Al foil and placed in a rotating bottom-loading CCR. Data were collected at 295 K, in 10 crystal orientations for 3 h each. Initial data reduction was done using the SXD2001 software within the IDL virtual machine, and refinement of the structural model was carried out in Jana2006. 39 The fractional coordinates and anisotropic atomic displacement factors were refined for all atoms against F_hkl². Translation, libration, screw (TLS) tensors were derived from experimental anisotropic displacement parameters in Platon. 40 Powder Diffraction. Synchrotron powder X-ray diffraction data were collected at beamline I11 at Diamond Light Source in high-resolution mode using the 45-crystal Multi-Analysing Crystal (MAC) detector and a wavelength λ = 0.8257653 Å (calibrated against a silicon standard NIST 640c). The sample was loaded into a 0.7 mm external-diameter quartz capillary to a length of 30 mm. The capillary was sealed and attached to a brass holder, which rotated during the measurements. Variable temperature data were collected every 4 K between 180 and 360 K. Laboratory powder X-ray diffraction data were collected in capillary mode on a Bruker D8 diffractometer equipped with a Lynx-Eye detector and an Oxford Cryosystems Cryostream Plus device, using Mo Kα radiation. The sample was warmed from 150 to 400 K at a rate of 15 K/h. A series of 20 min datasets were recorded over a 2θ range of 1−30° using a step size of 0.01°. All powder diffraction data were analyzed using the TOPAS Academic software. 41−43 Solid-State NMR and Thermal Analysis. ¹H solid-state NMR was recorded at 205 and 308 K with a 15 kHz spinning rate on a Bruker Avance III HD instrument. Differential scanning calorimetry (DSC) experiments were performed on a PerkinElmer Pyris 1 DSC instrument. The sample (2.831 mg) was heated from 120 to 520 K at rates between 10 and 100 K/min.
Symmetry-Mode Analysis. Symmetry-adapted distortion modes belonging to the irreducible representations of the parent-symmetry group, commonly referred to as "symmetry modes", were employed here to describe global patterns of rigid-body rotations. For brevity, we refer to such patterns simply as "rotational symmetry modes".
Rotational symmetry modes were defined for a dummy pivot atom placed at the center of inversion of each molecule. Command files containing both a rotational symmetry-mode description and a more familiar displacive symmetry-mode description were generated in a format suitable for TOPAS Academic. Rotational symmetry-mode amplitudes were refined directly using the laboratory powder diffraction data with displacive symmetry modes fixed at zero. Rotational symmetry-mode amplitudes were extracted from single-crystal experiments by minimizing the distances between all atoms in a rotational symmetry-mode description with displacive symmetry-mode amplitudes fixed at zero (based on the 300 K single-crystal structure) and the conventionally refined fractional atomic coordinates. The sensitivity of extracted rotational symmetry-mode amplitudes to internal distortions of molecules was assessed by also refining displacive symmetry-mode amplitudes to give a perfect fit to the conventional coordinates but with displacive amplitudes restrained to values as close to zero as possible, such that rotational symmetry modes described the majority of the atomic movement. All symmetry-mode calculations were performed using the ISODISTORT software. 44

Figure 1. Molecular structure of chloranilic acid (CA) and pyrazine (Pyz), and views of the CA-Pyz co-crystal structure at 300 K along a, b, and c (left to right). The monoclinic high-temperature (HT) unit cell is shown in gray. The bold blue "diamond" in the view of the ab plane shows the low-temperature triclinic cell discussed in the text and used for most analysis. Heavy black and blue arrows show directions of rotational symmetry modes for CA and Pyz, respectively. A chain of hydrogen-bonded CA-Pyz molecules is highlighted in red.
■ RESULTS AND DISCUSSION
Observation and Analysis of a Structural Phase Transition in CA-Pyz. Figure 2 shows powder diffraction data recorded on warming and subsequent cooling of CA-Pyz from 180 to 360 to 180 K (equivalent lab data used in Figure 4b are given in the Supporting Information (SI) Figure S1). It is immediately apparent from these data that CA-Pyz undergoes a phase transition at around 300 K, and the extreme splitting of certain hkl reflections suggests that the transition is associated with significant changes in the unit cell parameters. We see no significant features in the DSC at this temperature, but this is perhaps unsurprising given the wide temperature range over which the transition occurs. The 150 K data can be indexed with a triclinic unit cell with a = 4.78 Å, b = 5.83 Å, c = 10.71 Å, α = 82.0°, β = 81.5°, γ = 77.0°, V = 286 Å³. Figure 1 shows the relationship of this triclinic cell (bold blue lines) to the previously reported monoclinic high-temperature cell (gray lines). Figure 3 shows unit cell parameters extracted by Pawley fitting, 45 using both the warming and cooling datasets. On warming, we see some peak broadening just below the phase transition temperature T_C (300 K), which was modeled using just 4 of the 15 allowed terms of a 4th-order spherical harmonic to describe anisotropic microstrain. 41,46 The magnitudes of the coefficients obtained are included in the SI (Figure S2).
At high temperature, the Bragg peaks are essentially as sharp as at low temperature. On cooling, the phase transition is fully reversible, though there is a subtle thermal-history-dependent splitting of some of the "tuning-fork" reflections just below the phase transition temperature. By tuning fork, we mean reflections such as the ∼9.5° 2θ peak ((11−1) at high T; (011) and (101) at low T), which undergo particularly marked splitting, resulting in the reflections that are equivalent at high temperature being separated by several degrees in 2θ at low temperature. The cooling data can be Pawley-fitted by using three closely related phases just below T_C, and we show their unit cell parameters as open points and crosses in Figure 3. Importantly, the sample regains sharp, essentially unstrained peaks at low temperature, where the powder diffraction pattern can again be described using a single phase. This is demonstrated, for example, by the convergence of the cell parameters of the three Pawley-fit phases to identical values at low temperature. Similar effects have been reported in other high-anisotropy systems such as MIL-53 47,48 and could be caused by intergrain effects or the significant microstrain build-up between ∼220 and 300 K, which is shown in the SI (Figure S2).
Structure of the New (LT) Form of CA-Pyz. To gain detailed insight into the structural changes associated with the phase transition, single-crystal X-ray diffraction experiments were performed at 6 temperatures between 300 and 150 K (on both warming and cooling). The crystallographic parameters are summarized in Tables S1 and S2 and cell parameters plotted in Figure S3. At 300 K, CA-Pyz was found to adopt the previously reported monoclinic structure 49 in space group C2/m with unit cell parameters of a = 8.296(5) Å, b = 6.654(2) Å, c = 10.993(2) Å, β = 101.60(3)°and V = 594.44(11) Å 3 . We refer to this as the high-temperature (HT) form of CA-Pyz. Below this temperature, a new triclinic form (LT form, space group P1̅ ) was observed and its structure was solved from the 150 K data set. The molecular packing in the two forms is very similar, facilitated by a one-dimensional network of O−H···N hydrogen bonding (see Figure 5), which gives rise to the CA-Pyz chains highlighted in red in Figure 1. With the donor−acceptor distance of d ON ≈ 2.7 Å, these are medium-length O−H···N hydrogen bonds. 50 Using the C−O bond lengths (determined from X-ray diffraction more reliably than those involving hydrogen atoms) as a proxy for detecting potential temperature-induced proton migration, we find no evidence of this type of dynamics; these bond lengths remain at about 1.31 Å between 150 and 300 K ( Figure S4), suggesting that the single bond character is retained throughout this temperature region. 51 Similarly, we see no evidence to suggest significant distortions of individual molecules.
Single-crystal neutron diffraction data recorded at 295 K allowed accurate determination of the hydrogen atom positions and anisotropic atomic displacement parameters (ADPs). A view of the ADPs at all temperatures is shown in Figure S5 and for the 295 K structure in Figure 5. The most striking structural feature is the magnitude and the orientation of the ADPs on the atoms in the pyrazine molecule. Ishida and Kashino 49 noted large anisotropic displacements of the C4 atom perpendicular to the pyrazine ring, but their attempts to model these by disordering the carbon atoms around a pseudo two-fold rotation axis passing through the N atoms (i.e., as static disorder) were not successful. This, together with our observation of large H-atom ADPs, suggests a very prominent librational motion perpendicular to the pyrazine ring at high temperature. As an approximation, a translation, libration, screw (TLS) analysis of the experimental neutron ADPs 52 gives eigenvalues of the librational tensor of the Pyz molecule of ∼22, 8, and 6° and translations of 0.36, 0.23, and 0.1 Å².
Further evidence for the dynamics in the HT form of CA-Pyz comes from ¹H solid-state NMR data recorded at 205 and 308 K (Figure S6). Both the OH and Pyz proton signals undergo significant narrowing on warming: from 950/1490 to 550/600 Hz, respectively. The CA OH chemical shift changes by ∼0.5 ppm, indicating a minor change in the proton environment and hydrogen bond network between the two phases.
Rotational Symmetry-Mode Description of the Phase Transition in CA-Pyz. The power of using symmetry-adapted distortion modes belonging to the irreducible representations of the parent-symmetry group (a.k.a. symmetry modes) to describe the order parameters that arise in symmetry-reducing phase transitions is now widely appreciated. 44,53−56 Key advantages are that the essence of the phase transition can often be captured by a small number of parameters (mode amplitudes), each of which quantifies a structural distortion that breaks the parent symmetry in a unique way, and each of which is defined to be zero in the high symmetry parent phase. Symmetry-mode parameters, thus, tend to have clear physical interpretations and origins.
The full parameterization of an arbitrary crystal−crystal phase transition in terms of symmetry modes was initially accomplished for atomic displacements, which are polar vectors. 55−57 The group theoretical calculations required for the construction of displacive symmetry modes are available within the ISODISTORT software package. 44 ISODISTORT also includes symmetry modes to describe the ordering of atomic species (occupational order−disorder), magnetic moments (time-reversible axial vectors), and, most recently, rotational moments (non-time-reversed axial vectors). A rotational symmetry-mode approach can give a very compact description of an entire pattern of rotation vectors, where each vector rotates a group of atoms, such as a rigid molecule or polyhedral unit. Note that the three components of a molecular rotation vector capture the same information as would be specified by three rotational angles (e.g., around three orthogonal axes) in a conventional rigid-body description. The rotation vector is more convenient, because it can be directly visualized (see below) and because there are no arbitrary conventions to consider regarding the order of the rotations around different axes. A rotation vector's orientation defines the rotation axis, and its length defines the angle of rotation. 58 These ideas have previously been applied to describe the low-symmetry structure of the extended solid RbBr3Mg(H2O)6, 59 and, more recently, the unusual high-to-low-symmetry phase transition (on warming) of the molecular ferroelectric 5,6-dichloro-2-methylbenzimidazole 60 and the cooperative motions of networks of interconnected rigid units. 61 The refined structures from single-crystal diffraction experiments allow us to develop a straightforward description of the child LT phase of CA-Pyz in terms of the rotational symmetry modes belonging to the Γ2+ irrep of the parent HT phase. The Γ2+ irrep of space group C2/m results in a P1̅ subgroup of index 2 with basis {(1/2, 1/2, 0), (1/2, −1/2, 0), (0, 0, −1)} 62 and origin at (0,0,0) relative to the conventional parent cell (the blue cell of Figure 1). The child has 36 displacive symmetry modes (spanning the same space as the 36 free xyz atom-position parameters in the conventional description), 24 of which (the Γ1+ modes) describe structural degrees of freedom already present in the parent. Alternatively, if we assume rigid molecules, which merely rotate around their respective inversion centers, we need only consider 6 rotational symmetry modes (spanning the same space as the 6 free xyz components of the two molecular rotation vectors). The LT phase has one Γ1+ and two Γ2+ irrep modes for each molecule, which are illustrated in Figure 1. For small rotations of rigid molecules, these 6 parameters should allow us to capture the key features of the phase transition. If additional molecular distortions occur, these can be described by refining additional displacive symmetry-mode amplitudes as internal degrees of freedom within the "semirigid" bodies. These internal modes can also be used to correct for distortions caused by large changes in cell metric when rigid bodies are defined using parent fractional coordinates. If one attempts to simultaneously use both 6 rotational symmetry modes and 36 displacive symmetry modes to describe the structure, then the description will, of course, be overparameterized. This is corrected by removing 6 appropriate linear combinations of displacive symmetry modes that result in pure rotations.
Alternatively, one can use soft restraints to keep unnecessary displacive symmetry-mode amplitudes close to zero during refinement, forcing the rotational symmetry modes to "do most of the work" in describing atomic coordinate shifts. We adopt this latter approach for part of our analysis.
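The rotation-vector convention described above can be illustrated in a few lines: the vector's direction is the rotation axis and its length the rotation angle, so one vector replaces three ordered rotation angles. scipy implements exactly this parameterization. The axis below is arbitrary, and the 16° magnitude merely echoes the Pyz rotation discussed below.

```python
# Rotation vector (axis * angle) -> rotation matrix, and back.
import numpy as np
from scipy.spatial.transform import Rotation

axis = np.array([0.0, 1.0, 0.2])          # illustrative axis (not from the paper)
axis /= np.linalg.norm(axis)
angle = np.deg2rad(16.0)                  # ~16 deg, cf. the Pyz rotation

R = Rotation.from_rotvec(angle * axis)    # rotation vector -> Rotation object
print(np.round(R.as_matrix(), 3))         # equivalent 3x3 rotation matrix
print("recovered angle (deg):", np.rad2deg(np.linalg.norm(R.as_rotvec())))
```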
Rotational symmetry-mode amplitudes, extracted by fitting the rotating rigid bodies (as defined using the 300 K high symmetry parent structure) to the refined conventional atomic coordinates from lower temperature single-crystal datasets, are shown in Figure 4a. Figure 4b shows equivalent mode amplitudes refined directly from relatively low-quality laboratory powder diffraction data. The good qualitative and quantitative agreement between Figure 4a and 4b shows that this restricted parameter set (in contrast to traditional xyz refinements) can be reliably extracted from low-quality laboratory powder diffraction data. We see from these plots that Γ2+ rotations (circles) have by far the highest magnitude and are, therefore, the most important in describing and understanding the phase transition. Pyz molecular rotations (open markers) are also much more important than those of the CA molecule (closed markers). Figure 4d displays the resulting rotation vectors for each molecule. For Pyz, we see that the large ∼16° rotation is almost aligned with the N−N vector. For CA, the smaller ∼4° rotation is approximately perpendicular to the C−O vector and slightly out of the molecular plane. We emphasize that the mode amplitudes of Figure 4a, or their simpler representation in Figure 4g, describe the structural changes that occur due to the phase transition. From the temperature dependence of r4, we see that the effects of the transition are apparent from ∼150 to 300 K.
As with any rigid-body description, there are approximations in this approach such that the rigid-body coordinates will be slightly different to those from free refinement. Some differences arise from the fact that at high temperatures, the relatively rigid molecules undergo correlated vibrational motions leading to the well-known underestimation of bond lengths when using harmonic displacement parameters. 63 The magnitude of this effect can be appreciated from the left-hand panels of Figure 5, which show anisotropic displacement parameters extracted from neutron single-crystal diffraction refinements. These lead to the HT Pyz molecule showing shorter apparent bond lengths than at low temperature. Our TLS analysis of the neutron-derived structure gives observed and corrected C−N distances of 1.275 and 1.360 Å, respectively. These compare to bond lengths of 1.269 and 1.346 Å obtained from single-crystal X-ray data at 300 K and 150 K, respectively, confirming the librational shortening. Distortions due to the extreme changes in cell metric observed in this particular compound also lead to atoms being up to 0.1 Å from true positions in the molecular rotation description. If internal displacive symmetry-mode amplitudes are refined to give perfect atomic overlap with a restraint applied so that as much of the atomic displacements are captured by rotational symmetry modes as possible, the values of r 1 to r 6 (as shown in Figure 4a) change very little. The rotating rigid-body model, therefore, captures most of the key structural changes in a small number of parameters that can be routinely and reliably determined.
Remarkable Thermal Expansion Behavior of CA-Pyz. We can adopt a similar language to describe the cell parameter changes relative to 300 K that occur on cooling. This involves using strain modes, which transform as polar symmetrized second-rank tensors. The six degrees of freedom of the lowtemperature cell appear as four parent-allowed Γ 1 + strain modes (labeled here s 1 −s 4 ) and two parent-symmetry-breaking Γ 2 + modes (s 5 and s 6 ); their amplitudes are shown in Figure 4c. As expected, Γ 2 + strains refine to essentially zero above T c = 300 K for the monoclinic HT phase but show a marked deviation from 0 below 300 K, with the Γ 2 + s 5 strain reaching a remarkable −0.14 (−14%) by 180 K. It also seems that the Γ 1 + strains show a different temperature dependence than those of Γ 2 + irrep, with significant deviations from their high-temperature extrapolated behavior only appearing below T ≈ 260 K. The Γ 2 + s 5 mode also displays a clear discontinuity at this lower temperature.
The influence of the strain modes on individual cell parameters depends on the cell setting used. To properly decouple the physical changes of the crystal from the cell setting, we have calculated the thermal expansion along the principal axes of the expansion tensor. Figure 4e−g shows expansion indicatrices considering just Γ1+, just Γ2+, and the overall Γ1+ + Γ2+ strains, and their relationships to the parent cell axes. As anticipated from Figure 2, we see extremely large negative thermal expansion (blue) close to the parent b-axis, extremely large positive expansion (red) close to the a-axis, and more normal (though still large) positive expansion close to the c-axis. Average thermal expansion coefficients over the 200−300 and 250−300 K temperature ranges are given in Table 1, and their full temperature evolution is plotted in Figure 3e. We see that the extreme uniaxial contraction (α_l = −1500 × 10⁻⁶ K⁻¹ at 250 K) is dominated by Γ2+ strains, and that the dominant effect of Γ1+ is an expansion of the interlayer c-axis. If we use a nonstandard C1̅ setting for the child structure, we, of course, see the same temperature dependence of the strain modes, but they manifest in a different temperature dependence of the cell parameters (see SI Figure S7 and discussion below). In this alternative description, the large −14% amplitude of the s5 Γ2+ mode predominantly influences the cell angle γ, which changes from 90 to ∼101.5° between 300 and 150 K.
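The principal-axis decomposition used above can be sketched generically: form the deformation gradient between two cell matrices, symmetrize it to obtain a strain tensor, and diagonalize (essentially what tools such as PASCal do). In the sketch below, the cold cell uses the triclinic 150 K parameters quoted in the text, while the warm cell is made up in the same setting, because the true HT cell is reported in a different (monoclinic) setting.

```python
# Principal thermal-expansion axes from two cells in one consistent setting.
import numpy as np

def cell_matrix(a, b, c, al, be, ga):
    al, be, ga = np.deg2rad([al, be, ga])
    cx = c * np.cos(be)
    cy = c * (np.cos(al) - np.cos(be) * np.cos(ga)) / np.sin(ga)
    cz = np.sqrt(c**2 - cx**2 - cy**2)
    return np.array([[a, 0.0, 0.0],
                     [b * np.cos(ga), b * np.sin(ga), 0.0],
                     [cx, cy, cz]])              # rows = lattice vectors

M1 = cell_matrix(4.78, 5.83, 10.71, 82.0, 81.5, 77.0)  # 150 K cell (from text)
M2 = cell_matrix(5.10, 5.55, 10.80, 83.0, 82.5, 80.0)  # warm cell (made up)

F = np.linalg.solve(M1, M2)                      # deformation: M2 = M1 @ F
eps = 0.5 * (F + F.T) - np.eye(3)                # symmetrised strain tensor
vals, vecs = np.linalg.eigh(eps)
dT = 150.0                                        # temperature interval, K
for v, axis in zip(vals / dT, vecs.T):
    print(f"alpha = {v*1e6:+.0f} x 10^-6 /K along {np.round(axis, 2)}")
```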
The trigonometric dependencies of the individual child-cell parameters on the strain modes are considerably more complicated than that of the volume. However, we can expand these dependencies to first order and then apply the observed 150 K strain-mode amplitudes to determine which contributions are most important to the thermal expansion (Table 2). These approximations also help to rationalize the extremely unusual temperature dependencies of the cell parameters in Figure 3. For example, the unit cell angle β initially decreases on cooling through T_c but then increases below T ≈ 260 K, whereas the angle α increases monotonically on cooling. When we look at which modes influence α and β, we see that they vary in opposite directions with respect to either mode s5 or s6 of Γ2+; the sensitivity to s5 is weak and the amplitude of s5 is large, whereas the sensitivity to s6 is strong and the amplitude of s6 is small. As a result, s5 and s6 end up making comparable angle changes of magnitudes 0.811 and 0.575° at 150 K. The total contributions of Γ2+ are then Δα = 0.811 − 0.575 = 0.236° and Δβ = −0.811 + 0.575 = −0.236°. The sizeable amplitudes of modes s2 and s4 of Γ1+ must also be considered, especially due to the high sensitivity of both angles to s2; the total contributions of Γ1+ strains are Δα = Δβ = 1.019 − 0.199 = 0.820° at 150 K, which are ultimately much larger than those of Γ2+. The combined multi-irrep contributions (Γ1+ + Γ2+) are then Δα = 0.820 + 0.236 = 1.056° and Δβ = 0.820 − 0.236 = 0.584° at 150 K. However, because the Γ1+ modes acquire significant amplitudes only well below the 300 K transition (Figure 4c), the Γ2+ decrease in Δβ with decreasing temperature is well underway before the much larger Γ1+ increase in Δβ begins to evolve, resulting in the striking trend reversal observed.
Although less obvious from Figure 3d, similar effects are seen in the cell volume, which shows a marked discontinuity at 260 K and a smaller discontinuity at 300 K. This is more apparent in the plot of volumetric strain relative to that extrapolated from the HT structure in Figure 3f and the thermal expansion plots of (1/a)da/dT in Figure 3e. Unlike the individual cell parameters, the dependence of the cell volume on the strain amplitudes is relatively simple (a third-order multivariable polynomial); expanding it to first order gives ΔV/V ≈ s1 + s3 + s4. We therefore see that the volume change is approximately linear in, and hence very sensitive to, strains s1, s3, s4, which tend to operate primarily on the cell-edge lengths. In contrast, the volume change is merely quadratic in, and hence much less sensitive to, strains s2, s5, s6, which tend to operate primarily on the cell angles. As a result, despite the fact that the Γ2+ strain s5 dominates the structural (and presumably the energetic) changes, the much smaller Γ1+ strain s4 makes the dominant contribution to the volume change. It remains to explore why the primary Γ2+ strain modes and secondary Γ1+ strain modes evolve at such different temperatures. In this context, the word "secondary" implies the mathematical inability to generate the low-symmetry subgroup and hence the physical inability to drive the transition; in this case, the secondary modes do not break any parent symmetries at all, since Γ1+ is also the identity irrep. One possible explanation is the occurrence of a second phase transition near 260 K, which would necessarily be isosymmetric (space group P1̅ is retained) and first order, since its order parameter belongs to the identity irrep. We see no strong evidence to support such a phase transition in our data. In addition, a closer examination of Figure 4c reveals that the secondary Γ1+ modes exhibit small but nonzero departures from their high-temperature trends between 260 and 300 K. We, therefore, believe that the observed behavior is caused by a coupling between the primary Γ2+ and secondary Γ1+ order parameters. In essence, for strains as large as those observed in CA-Pyz, cross-coupling terms in the Landau free-energy expansion become sufficiently large that they can no longer be neglected.
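The first-order volume argument above is easy to verify numerically: the sketch below compares the exact fractional volume change of a strained cell with the trace (sum of diagonal strains) approximation. The cell matrix and strain tensor are illustrative, and the identification of diagonal entries with s1, s3, s4 and off-diagonal (shear) entries with s2, s5, s6 is only loose.

```python
# Exact dV/V of a strained cell vs. the first-order (trace) estimate.
import numpy as np

M = np.array([[8.3, 0.0, 0.0],
              [0.0, 6.7, 0.0],
              [-2.2, 0.0, 10.8]])       # illustrative monoclinic cell (rows)

def volume(M):
    return abs(np.linalg.det(M))

eps = np.array([[0.02, 0.005, 0.0],     # illustrative strain tensor:
                [0.005, -0.06, 0.0],    # diagonal entries ~ edge strains,
                [0.0, 0.0, 0.03]])      # off-diagonal ~ angle (shear) strains

V0 = volume(M)
V1 = volume(M @ (np.eye(3) + eps))      # strain applied in Cartesian frame
print("exact  dV/V :", (V1 - V0) / V0)
print("linear trace:", np.trace(eps))   # first-order approximation
```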
It remains to explore why the primary Γ 2 + strain modes and secondary Γ 1 + strain modes evolve at such different temperatures. In this context, the word "secondary" implies the mathematical inability to generate the low-symmetry subgroup and hence the physical inability to drive the transition; in this case, the secondary modes do not break any parent symmetries at all since Γ 1 + is also the identity irrep. One possible explanation is the occurrence of a second-phase transition near 260 K, which would necessarily be isosymmetric (space group P1̅ is retained) and first order since its order parameter belongs to the identity irrep. We see no strong evidence to support such a phase transition in our data. In addition, a closer examination of Figure 4c reveals that the secondary Γ 1 + modes exhibit small but nonzero departures from their high-temperature trends between 260 and 300 K. We, therefore, believe that the observed behavior is caused by a coupling between the primary Γ 2 + and secondary Γ 1 + order parameters. In essence, for strains as large as those observed in CA-Pyz, cross-coupling terms in the Landau freeenergy expansion become sufficiently large that they can no Right-hand columns show each principal-axis direction relative to crystallographic axes for the overall strain (i.e., components of Xn directions along a, b, and c, as shown in Figure 4g). Estimated uncertainties on expansion coefficients are ∼30 × 10 −6 K −1 . The values in a given column can be summed to determine the approximate total change in the corresponding cell parameter. Some of the smaller nonzero 150 K values were rounded to zero. For example, the pseudo-linear slope of the Δc(s 2 ) curve is −3.061 Å, so that a value of s 2 = +0.0169 (at 150 K) results in a change of Δc ≈ (−3.061 Å)(0.0169) = −0.052 Å, in addition to Δα = Δβ ≈ (+1.050)(0.0169)(180°/π) = +1.019°. Figure 5. Rotational symmetry-mode refinement shows that the main structural change is a cooperative rotation of all Pyz molecules in the z = 0 plane, which occurs around an axis close to the H···N(Pyz)N···H hydrogen bond direction (animations available in the SI). This is dominated by the r 4 Γ 2 + rotation, with the smaller r 5 Γ 2 + contribution acting to bring the rotation vector in line with the H-bonded chain. The significant shift in Pyz positions driven by the rotation couples with a sliding of adjacent planes of CA molecules, with each layer moving laterally from its high-temperature position by ∼1 Å between 300 and 250 K. This is enabled by the relatively weak van der Waals interactions between CA planes and leads to the significant expansion of the b-axis on cooling. In addition to the dominant Γ 2 + rotations, we see a small r 6 Γ 1 + rotation below 260 K (shown with squares in Figure 4a). This corresponds (Figures 1 and 5) to a rotation of Pyz around an axis perpendicular to the N−N direction and in the plane of the ring. This motion "buckles" the CA-Pyz chain, causing a reduction in the c axis (∼0.2 Å) and a small expansion on cooling in the ab plane (∼0.1 Å increase in γ ab sin( ) as an in-plane measure of the dimension) as the Pyz plane becomes more closely aligned with it. It seems likely that this distortion occurs as a result of the large r 4 rotation and provides the observed coupling between Γ 2 + and Γ 1 + strains that emerges with the decreasing temperature.
■ CONCLUSIONS
Our variable-temperature single-crystal and powder diffraction studies have shown that CA-Pyz undergoes a reversible phase transition close to room temperature, which leads to extreme anisotropy in thermal expansion and a remarkable 14% size reduction in one dimension over a 100 K range. The effects of the transition are seen over a ∼150 to 300 K temperature range, with highly anisotropic thermal expansion coefficients of up to +1500 and −1500 × 10^−6 K^−1. These occur over a significant temperature range close to room temperature and without any external driving effects such as spin crossover or composition change. In the HT form, anisotropic atomic displacement parameters derived from neutron diffraction data suggest significant (∼22°) librational disorder of pyrazine molecules around an axis defined by the H···N(Pyz)N···H hydrogen bonds, which freezes out as a ∼16° static rotation on cooling. The transition, which is presumably entropically driven, is enabled by the cooperative sliding of adjacent molecular layers by ∼1 Å and leads to the high thermal anisotropy. It thus has similarities to the alkyl-chain disorder reported in a Co complex, 64 which leads to cell parameter changes of +8/−5% at a first-order transition, and to the abrupt uniaxial contraction caused by oxalate reorientation in [Ni II (en) 3 ](ox). 65 It differs from these examples in that the transition is second-order-like and occurs over a broad temperature range. The cooperative nature of the structural change, which involves relatively small displacements of adjacent layers, and the relationship between the parent and child cells are reminiscent of a diffusionless martensitic transition. It is therefore likely that a similar change can be stress-induced, potentially leading to phenomena such as superelasticity or shape memory. Extreme cell parameter changes are often associated with martensitic transitions; for example, the transitions in NiTi and the organic superelastic terephthalamide 13 correspond to cell parameter changes approaching ±10 and ±7%, respectively.
This work also shows that rotational symmetry modes provide a powerful symmetry-derived description of the phase transition in CA-Pyz, allowing us to describe the essence of the structural changes in a small number of parameters, principally a Γ 2 + strain and rotation. The use of strain modes to describe thermal expansion also allows us to understand the unusual temperature dependence of cell parameters in this material in terms of coupling between primary and secondary order parameters. These methods can be used for a wide range of molecular and hybrid materials undergoing phase transitions and can be readily applied using the ISODISTORT and TOPAS approaches, allowing useful information to be determined on such materials even using relatively low-quality powder diffraction data. | 8,556.8 | 2019-05-28T00:00:00.000 | [
"Physics"
] |
Optical glucose sensors based on hexagonally-packed 2.5-dimensional photonic concavities imprinted in phenylboronic acid functionalized hydrogel films
Continuous glucose monitoring aims to achieve accurate control of blood glucose concentration to prevent hypo/hyperglycaemia in diabetic patients. Hydrogel-based systems have emerged as a reusable sensing platform to quantify biomarkers in high-risk patients at clinical and point-of-care settings. The capability to integrate hydrogel-based systems with optical transducers will provide quantitative and colorimetric measurements via spectrophotometric analyses of biomarkers. Here, we created an imprinting method to rapidly produce 2.5D photonic concavities in phenylboronic acid functionalized hydrogel films. Our method exploited diffraction properties of hexagonally-packed 2.5D photonic microscale concavities having a lattice spacing of 3.3 μm. Illumination of the 2.5D hexagonally-packed structure with a monochromatic light source in transmission mode allowed reversible and quantitative measurements of variation in the glucose concentration based on first-order lattice interspace tracking. Reversible covalent phenylboronic acid coupling with cis-diols of glucose molecules expanded the hydrogel matrix by ∼2% and 34% in the presence of glucose concentrations of 1 mM and 200 mM, respectively. A Donnan osmotic pressure induced volumetric expansion of the hydrogel matrix due to increasing glucose concentrations (1-200 mM) resulted in a nanoscale modulation of the lattice interspace, and shifted the diffraction angle (∼45° to 36°) as well as the interspacing between the 1st-order diffraction spots (∼8 to 3 mm). The sensor exhibited a maximum lattice-spacing diffraction shift within a response time of 15 min in a reversible manner. The developed 2.5D photonic sensors may have application in medical point-of-care diagnostics, implantable chips, and wearable continuous glucose monitoring devices.
Introduction
Diabetes is one of the most serious health problems worldwide. 1,2 It is a chronic disease characterized by a disorder of glucose metabolism, which is reflected in an elevated concentration of blood glucose. 3,4 Health complications caused by diabetes include heart disease, kidney failure, blindness and an increase in disability-adjusted life years. [5][6][7] In 2015, the estimated diabetes prevalence was 415 million adults, which is projected to reach 642 million by 2040. 8 This epidemic also poses an enormous economic burden on society; 9 the direct annual cost of diabetes to the world is more than $827 billion. 10 Appropriate medication and glucose concentration control can improve treatment efficacy by mitigating the symptoms and reducing the complications. 5,[11][12][13] For this reason, glucose monitoring is crucial in diabetes management.
Currently, the most common method of monitoring glucose concentration is the finger-prick test, an electrochemical method based on enzymes such as glucose oxidase and glucose dehydrogenase. 14 This procedure is inconvenient for patients and, being invasive, may lead to infections. Additionally, it does not allow real-time measurements, and the sensors cannot be reused due to the irreversibility of the reactions. 15 Moreover, the sensitivity of such electrochemical and enzymatic sensors is affected by numerous factors, such as interference from the high partial pressure of oxygen, maltose and haematocrit. 14,15 Hence, development of a new continuous and noninvasive glucose monitoring system is necessary to overcome problems related to the conventional electrochemical method. 16 It is highly desirable that the new system provide information about real-time fluctuations in blood glucose concentrations, which improves the accuracy of insulin administration in diabetes management. 17 To date, different approaches have been investigated to achieve a complete solution. [18][19][20][21] Optical sensors seem to overcome the limitations of existing sensors, since they can provide fast, quantitative measurements in real time and in a reversible manner. 16,22 Recent advances in photonics and polymer chemistry have enabled the fabrication of photonic sensors on soft hydrogel materials and have led to an increased interest in hydrogel-based optical glucose sensors. 15 Hydrogels are highly water-absorbing polymers capable of undergoing reversible volume changes. 23 They can be designed to respond to certain stimuli (e.g. temperature, pH, ionic strength, metal ions, antigens, proteins). [24][25][26][27][28][29][30] The selectivity is obtained by functionalizing hydrogels with receptor molecules that are sensitive to a particular stimulus or molecule. 31,32 One promising approach for glucose detection using hydrogels is the covalent incorporation of boronic acids in a copolymer matrix. [33][34][35][36][37] Boronic acids bind to diol-containing carbohydrate species, such as glucose, through reversible boronate formation. 38,39 Upon binding of the boronic acid copolymer with glucose, the polymer network swells and alters its physical and optical properties, which can be used for quantitative glucose analyses. 31,32 Glucose-responsive hydrogels can be incorporated into photonic devices. The inclusion of the photonic sensor in the hydrogel can help in the development of superior analytical devices. Such photonic devices work through controlling and manipulating the propagation of light. 40 Over the last two decades, many approaches including laser writing, self-assembly, and layer-by-layer deposition have been demonstrated to create Bragg diffraction gratings, micro-lenses, etalons and plasmonic structures in hydrogels. However, no commercial device has been released yet due to unsatisfactory sensitivity and specificity issues. 10 In this paper, we propose a new optical glucose sensor based on a hexagonal diffraction grating imprinted on a flexible hydrogel. The fabrication method is quick and cost-effective. The sensor detected changes (of ∼8° overall) in the diffraction angle within 15 min due to increasing glucose concentrations (1-200 mM); see Fig. 1 for a schematic illustration of the concept. This change could also be detected clearly under an optical microscope: the minimum increase in the thickness of the hydrogel sensor was ∼2% for the lowest concentration of 1 mM.
These 2.5D glucose sensors could be used multiple times as the detection was observed to be reversible as well as repeatable.
Results and discussion
A honeycomb 2.5D structure was mirror-replicated to obtain a polydimethylsiloxane (PDMS) stamp by a micro-imprinting process using a honeycomb master grating. 41 The PDMS solution was prepared by mixing the PDMS base Sylgard 184 (Dow Corning) with the provided curing agent in a 10 : 1 (w/w) ratio and stirring the solution for 10 min at 24 °C. This solution was placed under low vacuum for 5 min to remove bubbles. The mixture was then poured onto the master grating and covered with a glass slide. The sample was cured in an oven for 40 min at 60 °C. The curing process solidified the PDMS, giving a mirror-replica of the parent 2.5D microstructure of the master grating for the subsequent fabrication process of the sensor, see Fig. 2(a-d).
The micro-replication process did not damage the original 2.5D grating, such that multiple PDMS stamps could be fabricated from a single master. Subsequently, each individual PDMS stamp could be used multiple times for the preparation of glucose sensors before it started to show some degradation. Acrylamide, N,N′-methylenebisacrylamide, 3-(acrylamido)phenylboronic acid (PBA), dimethyl sulfoxide (DMSO) and 2,2-dimethoxy-2-phenylacetophenone (DMPA) were used as core components of our glucose-sensitive hydrogel (GSH): acrylamide (78.5 mol%), N,N′-methylenebisacrylamide (1.5 mol%) and PBA (20 mol%) were mixed together. A solution of 2% (w/v) DMPA in DMSO was added to the mixture at a ratio of 1 : 1 (v/v). Subsequently, this mixture was stirred very well (120 min, at room temperature) in order to ensure good homogeneity. The resulting mixture was poured directly onto the PDMS stamp and covered with a glass slide. The thickness of the sample was controlled by setting the gap between the glass slides with a fine shim of known thickness. The sample was then moved to an ultraviolet (UV) curing chamber and cured with UV light for 5 min. Then, it was kept in DI water for 5 min and peeled off; the hydrophobic nature of the surface of the PDMS stamps facilitates an easy peel-off process. Mirror-replication of the 3D structure copied from the PDMS stamp onto the GSH reproduces the original structure of the hexagonal 2.5D master grating, see Fig. 2(h). All samples were hydrated overnight in deionized (DI) water before further use.
The surfaces of the 2.5D grating, PDMS and GSH were imaged by a scanning electron microscope (SEM) (JCM-6000PLUS NeoScope Benchtop). Before imaging, samples were coated with a gold layer (5-10 nm) using an Agar sputter coater to avoid charging effects; the specimens, being highly dielectric in nature, would otherwise accumulate charge and give poor resolution. SEM images show that the hexagonal structure of the 2.5D grating mimics the true honeycomb architecture, such that pits of a certain depth surrounded by elevated walls form hexagonal cells, with an average cell constant of ∼3.0 ± 0.3 μm and a depth (/height) of ∼1.2 μm. The mirror replication of this structure on PDMS is a conjugate fit, i.e. domes replace the pits in the mirror-replication process, and the walls in the original structure become the deeper parts of the replica. The GSH copied from the PDMS stamp again reproduces the original 2.5D honeycomb (hexagonal) structure, see Fig. 3(a-c). All three specimens exhibit near-perfect surface morphology with almost no defects, suggesting faithful copying from the 2.5D grating to the PDMS stamp and subsequently from the stamp to the GSH sensor.
Since the volumetric change of the GSH in the presence of glucose is also one way of measuring the glucose content of solutions, optical microscopy (Axio Scope A1, Zeiss) was performed to determine the thickness of the pristine sample and under different conditions (after exposure to different glucose concentrations, discussed later). We obtained a cross-section thickness of ∼221 μm for a dry pristine GSH sensor.
Angle-resolved far-field diffraction experiments were carried out using the original 2.5D grating, PDMS and GSH samples; see Fig. 3(d) for a schematic illustration of the experiment. The sample was carefully placed in a transparent plastic cuvette, mounted on a motorized precision rotation stage and aligned normal to the incident laser beam. The intensity of each diffracted beam was measured using an imaging screen placed at a distance of 45 cm from the sample, as well as by using an optical power meter (Newport, 1918-R) traversable on a circular rail (CR) of radius 13 cm with the sample mounted at its centre (the radius of the CR also defines the measurement distance between the sample and the power meter). Three laser sources, red, green and blue (640, 532 and 491 nm) (Newport), were used in the diffraction experiments. Measurements were recorded in dry (pristine) and soaked conditions (in PBS solution). In order to perform glucose sensing, the cuvette was filled with different solutions and the whole sample was submerged before taking the measurement. The forward-scattered spectra were collected in all cases by rotating either the cuvette or the detector itself in increments of 1°, from 0° to 180°, relative to the sample normal. For reference, the intensity/power of the incident light (blank) was also recorded, and the percentage of the diffracted intensity for each diffraction spot was calculated. A simple method was adopted to record the glucose-induced shift in the spectra: the change in the displacement between two opposite 1st-order points, such that the displacement line passes through the centre (0-order) of the diffraction pattern, was recorded as a function of glucose concentration. Photographs of the diffracted spectra taken on an imaging screen were also analysed with ImageJ software, and diffraction efficiency (intensity) was plotted against the 1st-to-1st-order interspace and the diffraction angle.
Photographs of the master grating, PDMS and GSH taken in white light revealed their diffractive properties, as the colors present in the incident light were resolved over space; see Fig. 3(e-i) for photographs of all three samples along with computationally calculated Fourier transforms (FT) of their microscopic images revealing their hexagonal architecture. Fig. 3(k-m) shows an example of experimentally obtained diffraction from the PDMS stamp, whereas a reverse FT can be exploited to redraw the physical structure from which the light originally diffracts. We plotted angle-resolved diffracted intensities, normalized (to 1), for up to 3rd diffraction orders as a function of the diffraction angle for the original 2.5D grating and the PDMS stamp in Fig. 3(o-t). The 0-order peak was the strongest in both cases, suggesting that most of the light was transmitted straight to the 0 order without being diffracted: blue, green and red illumination resulted in 0-order intensities of 29, 35 and 40% for the 2.5D grating, and 16, 28 and 41% for the PDMS stamp, respectively. Intensities of higher orders (1st, 2nd, etc.) decreased with increasing order number. Consistent with the 0-order behaviour, a slight difference in diffracted intensities (e.g. for the 1st order) was also observed between the two samples: blue, green and red illumination resulted in 1st-order intensities of 3.2, 2.9 and 2.7% for the 2.5D grating, and 7.7, 6.8 and 6.2% for the PDMS stamp, respectively. Notice that the light distribution in diffraction depends on the incident wavelength. For shorter wavelengths, less transmission to the 0-order meant stronger diffraction, such that the light was distributed more among the subsequent orders, whereas for longer wavelengths, more light was transmitted to the 0-order without being diffracted. Angle-resolved measurements confirmed that the diffraction angles for the original 2.5D grating and the PDMS replica were identical. The diffraction angles between the normal and the 1st-order peaks for the blue, green and red lasers were 10°, 13° and 16°, respectively, consistent with Bragg's law.
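The Fourier-transform analysis performed on the microscope images can be reproduced on a synthetic lattice; a minimal numpy sketch, where the image construction and pixel size are illustrative assumptions and only the ∼3.0 μm cell constant comes from the SEM measurement above:

```python
# Sketch: 2-D Fourier transform of a synthetic hexagonal lattice image.
# A hexagonal real-space lattice produces six first-order FT spots arranged
# hexagonally about the origin, mirroring the observed diffraction patterns.
import numpy as np

a = 3.0      # lattice constant, micrometres (SEM value from the text)
px = 0.1     # assumed pixel size, micrometres/pixel
n = 512      # assumed image size in pixels

y, x = np.mgrid[0:n, 0:n] * px
# Build the hexagonal lattice as a sum of three plane waves at 120 degrees.
g = 4 * np.pi / (np.sqrt(3) * a)           # reciprocal-lattice vector magnitude
angles = [0, 2 * np.pi / 3, 4 * np.pi / 3]
img = sum(np.cos(g * (x * np.cos(t) + y * np.sin(t))) for t in angles)

ft = np.abs(np.fft.fftshift(np.fft.fft2(img)))
# The dominant off-centre intensity clusters at six hexagonally arranged
# positions, at spatial frequency g/(2*pi) cycles per micrometre.
peaks = np.argwhere(ft > 0.5 * ft.max())
print("pixels above half-max in the FT (six first-order spots plus leakage):",
      len(peaks))
```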
Angle-resolved diffraction measurements were carried out for the GSH in its dry and wet conditions, Fig. 4. This was done before carrying out the glucose-sensing experiment, as the hydrophilic nature of the sensing material results in an initial swelling that needed to be taken into account beforehand in order to perform an error-free measurement. When the sample was soaked in PBS, it absorbed the liquid and swelled in all three dimensions. During the analysis, two main observations were made regarding the behavior of the diffraction patterns. Firstly, the intensity (efficiency) of the transmitted light dropped significantly when the sample was wet. For the dry (wet) condition, the efficiency of the 0-order spot was 64 (32), 63 (28) and 58% (34%) for the blue, green and red lasers, respectively. The decrease in efficiency in the wet condition can be explained by the Beer-Lambert law, which states that increasing the thickness of the material through which light travels decreases the light transmission. As soon as the sample underwent the initial swelling as a result of the absorbed PBS solution, more light was absorbed by the swollen material. Secondly, the diffraction angle of the transmitted laser light decreased when the sample was in its wet condition. Diffraction angles between 1st-order spots generated by the dry (wet) sample were ∼10° (8°), 14° (11°) and 16° (12°) for the blue, green and red lasers. By the same token, the distance between the 1st-order diffraction points projected on the imaging screen also decreased. The reason for such a negative shift of the diffraction angle is the increase in the gap size (groove constant) of the micro-grating imprinted on the hydrogel. Absorption of PBS by the hydrogel sample resulted in its three-dimensional expansion, thereby expanding the surface and the features present on it. According to Bragg's equation, nλ = 2d sin θ, where n is the diffraction order, d is the groove constant, and θ is the diffraction angle, the observed shift in the diffraction pattern can be explained. Therefore, the volumetric change of the hydrogel material was detected by analyzing the changes in the diffraction pattern generated by the light transmitted through the grating imprinted on the sample. It was also established that the resolution of the sensor strongly depended on the wavelength of the laser light illuminating the sample: the red laser gave better-resolvable measurements than the shorter wavelengths.
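The sign and rough magnitude of the wet-state angle shift follow directly from the relation quoted above; a small sketch using the text's Bragg form nλ = 2d sin θ, where the dry groove constant and the 10% swelling factor are assumptions for illustration only:

```python
# Sketch: how an increase in the groove constant d reduces the first-order
# diffraction angle, using the Bragg relation n*lambda = 2*d*sin(theta)
# quoted in the text. d_dry and the swelling factor are illustrative.
import math

lasers_nm = {"blue": 491, "green": 532, "red": 640}  # wavelengths from the text
d_dry_um = 1.5          # assumed effective groove constant, dry state
swelling = 1.10         # assumed 10% in-plane expansion on soaking

for name, lam in lasers_nm.items():
    lam_um = lam / 1000.0
    th_dry = math.degrees(math.asin(lam_um / (2 * d_dry_um)))           # n = 1
    th_wet = math.degrees(math.asin(lam_um / (2 * d_dry_um * swelling)))
    print(f"{name:5s}: dry {th_dry:5.2f} deg -> wet {th_wet:5.2f} deg "
          f"(shift {th_wet - th_dry:+.2f} deg)")
```

Consistent with the measurements, the longer the wavelength, the larger both the absolute angle and the swelling-induced shift, which is why the red laser resolves the change best.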
For glucose sensing, angle-resolved diffraction measurements were carried out in the far field by normally illuminating the GSH grating sensor with a green laser and recording the diffraction pattern on an imaging screen located at a distance of 45 cm from the sensor; see Fig. 5(a) for snapshots of the 1st-order interspace taken for increasing glucose concentration. Increasing glucose concentration can be appreciated by noticing a negative shift in the diffraction angle/1st-order interspace resulting from the increasing groove constant of the illuminated GSH structure. The observation is reversible; that is, the diffraction angle increased or decreased due to the shrinking or swelling of the grating upon exposing the same sensor to low or high glucose concentrations, respectively. Fig. 5(b) shows the diffraction efficiency versus diffraction angle (between the 0 and 1st orders) after the sensor was soaked in solutions of different glucose concentrations for 1 h. When the sample was soaked in PBS (without glucose) the diffraction angle was ∼28°. Subsequently, after removing the PBS solution, different glucose solutions were added one by one to examine their effect on the diffraction pattern; the diffraction angle decreased due to the increasing sensor size, with a maximum change of ∼8° for the 200 mM glucose solution. In this experiment, the lowest concentration that could be detected accurately was ∼10 mM, for which the change in the 1st-order interspace was ∼3 mm (diffraction angle ≈ 0.3°) compared with the PBS-soaked condition. However, this sensitivity could be improved considerably by refining various experimental parameters, such as the laser spot size and the distance between the GSH and the imaging plate, and by using a more precise rotation stage.
Response time is a parameter that determines how fast the sensor responds. It is important because quick real-time capture of the change in sugar level leads to better treatment/management. Fig. 5(d) represents the change in the angle over time for the 100 mM glucose solution. Within the first 10 min, a rapid change was observed, which moved towards saturation at ∼15 min; the change after 15 min was negligible. It is important to note that it is not only the interspace that can be translated into different concentrations: the time slope for different glucose concentrations is also different. Therefore, the change in glucose concentration can also be measured well before 15 min by measuring the slope of the interspace-time curve. Other studies suggest that such sensors may take over 1 h to respond. 42 In this work, we have demonstrated a much faster response time compared with previous studies. Further improvement in the response time can be achieved by using a thinner GSH grating and/or a more responsive phenylboronic acid (however, this is the subject of a separate report). A thin slice was cut off the GSH sample and placed under the microscope to measure its thickness and its direct response to different glucose concentrations. The slice was placed vertically between two small glass slides and positioned on a transparent Petri dish. Then, a buffer solution of pH 7.4 was poured into the dish in order to measure the initial increase in the thickness, that is, in the presence of the buffer reference. The initial thickness of the sample without glucose was ∼305 μm. Subsequently, the sample was soaked in different glucose solutions. With increasing glucose concentration, the thickness increased, see Fig. 6(a). The lowest detectable glucose concentration was 1 mM. For this concentration, the thickness increased by ∼7 μm, which is ∼2% of the initial thickness. At 200 mM, the thickness increased by ∼34%. A linear correlation was found between the cross-section thickness and glucose concentration at low concentrations, that is, within the range between 1 and 10 mM, see Fig. 6(b-d). Notice that this range is in fact the physiological range and could be useful in sensing applications. Extension of this work to measure blood or urine glucose concentrations is the subject of our next report. The data suggest that the swelling process is uniform in all three dimensions: comparing the change in thickness along the z-axis from microscopic images with the change in the x-y plane extracted from the diffraction angle measurements, a linear correlation between the change in thickness and the change in the diffraction angle is obtained, see Fig. 6(e).
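The slope-based early readout suggested above can be implemented as a simple curve fit; a sketch on synthetic interspace-time data (all numbers illustrative), assuming a saturating-exponential response:

```python
# Sketch: fit a saturating response s(t) = s_inf*(1 - exp(-t/tau)) to early
# interspace-shift readings; the initial slope s_inf/tau can then serve as a
# concentration proxy well before the ~15 min saturation. Data are synthetic.
import numpy as np
from scipy.optimize import curve_fit

def response(t, s_inf, tau):
    return s_inf * (1.0 - np.exp(-t / tau))

# Illustrative readings (minutes, mm of 1st-order interspace shift),
# loosely mimicking saturation near 15 min.
t = np.array([1, 2, 4, 6, 8, 10, 12, 15], dtype=float)
s = np.array([0.9, 1.6, 2.7, 3.3, 3.7, 3.9, 4.0, 4.05])

(s_inf, tau), _ = curve_fit(response, t, s, p0=(4.0, 4.0))
print(f"saturation shift ~ {s_inf:.2f} mm, time constant ~ {tau:.1f} min")
print(f"initial slope ~ {s_inf / tau:.2f} mm/min")
```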
Although the thickness measurements gave better resolution than the optical measurements below 10 mM, due to our experimental limitations, there is considerable room to refine the diffraction experiments for much higher resolution. The difficulty of detecting the change in the diffraction angle at low concentrations can be overcome by using a smaller feature size of the diffraction grating, using a longer laser wavelength and decreasing the spot size. More responsive (larger swelling coefficient) phenylboronic acids, providing larger glucose-modulated changes in the imprinted patterns, can also be used for enhanced sensitivity and improved selectivity, such as 2-(acrylamido)phenylboronate, bisboronic acid, and 4-vinylphenylboronic acid. 33,[43][44][45] Increasing the surface area by making nanoporous structures and introducing a gating membrane have also been proven to increase analyte diffusion and the rate of complexation. 46 Borrowing similar techniques from previous studies can also help improve the performance of our proposed glucose sensor. Table 1 highlights some of the recent strategies employed to monitor glucose concentration and their challenges compared to the standard electrochemical method. Optical detection of glucose can offer an alternative route to non-invasive, continuous glucose monitoring in a point-of-care setting for diabetic and non-diabetic patients in the near future.
Conclusion
We have demonstrated a new glucose sensor based on a physically patterned glucose-responsive hydrogel. The hydrogel was based on polyacrylamide and N,N′-methylenebisacrylamide copolymerized with a phenylboronic acid, 3-(acrylamido)phenylboronic acid. The patterning was carried out by micro-imprinting of a hexagonal structure from a PDMS mirror-replica of a 2.5D honeycomb grating. Sensing was performed by carrying out optical diffraction measurements from the patterned hydrogel surface in the presence of different glucose concentrations. Glucose binding with phenylboronic acid resulted in physical swelling of the hydrogel, which led to the expansion of the sensor's surface imprinted with micro-patterns. This change in the Bragg diffraction was measured in a far-field transmission configuration. A clear modulation of the 1st-order interspace with varying glucose concentration was recorded. Direct observation of glucose-induced swelling of the hydrogel was carried out under an optical microscope. A linear relationship between the surface and volume expansions was established. A minimum glucose concentration of 1 mM was successfully recorded, suggesting the sensor's usability under physiological conditions. We demonstrated that the fabrication of such sensors is quick and cost-effective compared to their conventional counterparts, and suitable for mass production.
Methods
The boronic acid-diol interaction is highly pH-dependent. 43 For this reason, all measurements were conducted in phosphate-buffered saline (PBS) at a constant pH of 7.4. Stock solutions of phosphate-buffered saline were prepared from PBS tablets (ThermoFisher Scientific). A high-concentration glucose solution (200 mM) was prepared by dissolving D-glucose (dextrose anhydrous, Science Lab) in the PBS solution. The buffer solution containing glucose was serially diluted with PBS to prepare various glucose concentrations in the range from 1 to 200 mM. A fresh solution was prepared for each trial and used immediately after preparation.
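The dilution arithmetic behind the concentration series follows C1V1 = C2V2; a small sketch computing direct-dilution equivalents of the serial procedure from the 200 mM stock, where the 10 mL working volume is an assumption:

```python
# Sketch: volumes needed to prepare each glucose concentration from the
# 200 mM stock by dilution with PBS (C1*V1 = C2*V2). The 10 mL working
# volume is an illustrative assumption; the text specifies serial dilution.
stock_mM = 200.0
targets_mM = [1, 2, 5, 10, 20, 50, 100, 200]   # range stated in the text
v_total_mL = 10.0                              # assumed working volume

for c in targets_mM:
    v_stock = c / stock_mM * v_total_mL
    print(f"{c:6.1f} mM: {v_stock:5.2f} mL stock + "
          f"{v_total_mL - v_stock:5.2f} mL PBS")
```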
Conflicts of interest
There are no conflicts to declare. | 5,332.6 | 2017-11-21T00:00:00.000 | [
"Engineering"
] |
Analysis of Two-Worm Interaction Model in Heterogeneous M2M Network
With the rapid development of M2M (Machine-to-Machine) networks, the damage caused by malicious worms is getting more and more serious. By considering the influence of network heterogeneity on worm spreading, we are the first to study the complex interaction dynamics between benign worms and malicious worms in a heterogeneous M2M network. We analyze and compare three worm propagation models based on different immunization schemes. By investigating the local stability of the worm-free equilibrium, we obtain the basic reproduction number R0. Besides, by using suitable Lyapunov functions, we prove that the worm-free equilibrium is globally asymptotically stable if R0 ≤ 1 and unstable otherwise. The dynamics of the worm models are completely determined by R0. In the absence of birth, death and users' treatment, we obtain the final size formula of worms. This study shows that nodes with higher node degree are more susceptible to infection than those with lower node degree. In addition, the effects of various immunization schemes are studied. Numerical simulations verify our theoretical results. The research results are meaningful for further understanding the spread of worms in heterogeneous M2M networks, and for enacting effective control tactics.
Introduction
With the development of cloud computing and M2M technologies, the threat from worms and their variants is becoming increasingly serious. According to the 2015 Symantec Global Internet Security Threat Report [1], the year 2014 was a year of far-reaching vulnerabilities, faster attacks, files held for ransom, and far more malicious code than in previous years. Many measures are taken against the propagation of malicious worms, such as anti-virus software, patches, firewalls, etc. However, those measures neither capture the transmission mechanisms of worms nor predict their spread trends [2]. Thus, such general defense and detection measures cannot help us provide more effective countermeasures against malicious worms and their variants.
Mathematical modeling is an important tool in analyzing and controlling the spread of malicious worms. The process of model formulation comprehensively uses assumptions, parameters, and variables. Besides, those models provide us with theoretical conclusions, such as the basic reproduction rate, feasible region, effective contact rate, and thresholds. Theoretical analysis and numerical simulations are useful tools for testing theoretical results, determining how to change sensitive parameter values, estimating key parameters from known data, and establishing effective control measures. Mathematical models effectively help us understand the mechanisms of worm propagation, predict the spread trends of worms, and develop better approaches for decreasing the transmission of malicious worms.
According to network characteristics, networks can be divided into two kinds: homogeneous and heterogeneous networks. In a homogeneous network, most nodes have similar degrees, and the degree distribution is narrow; homogeneous networks include regular networks [3], Erdős-Rényi random networks [4], and small-world networks [5]. Many epidemic models [6][7][8][9][10][11][12][13][14][15] have been developed to understand spreading dynamics based on the fully-connected assumption of the homogeneous network. In those models, all nodes are divided into different compartments corresponding to different epidemiological states. Nevertheless, this fully-connected assumption is inconsistent with real-world network topology; that is, the assumption that each computer has an equal probability of scanning any other individual in the network is unreasonable. Thus, the homogeneous mixing hypothesis is an overly simplified assumption, and is generally unrealistic. It is not appropriate for modeling epidemics in heterogeneous networks. Contrary to homogeneous networks, the nodes of a heterogeneous network show large fluctuations in degree distribution; heterogeneous networks include scale-free networks [16], broad-scale networks [5] and M2M wireless networks. Several models [17][18][19][20][21][22][23][24] have been developed to model such complex patterns of interaction in complex networks. Of course, the spreading mechanisms and dynamics of worm models in different kinds of networks are different. The M2M (Machine-to-Machine) network is a network that is based on the intelligent interaction among smart devices and the blending of several heterogeneous networks, such as WAN (Wide Area Network), LAN (Local Area Network) and PAN (Personal Area Network). This decade, M2M communications over wired and wireless links continue to grow, and as such, various applications of M2M have already started to emerge in various sectors such as vehicular, healthcare, smart home technologies, and so on [25]. In recent years, the damage caused by malicious worms and their variants in M2M wireless networks has become increasingly serious, due to the variety of network forms, the openness of information, the mobility of communication applications, the security vulnerability of operating systems, the complexity of network nodes, and so on. Thus, in order to control the large-scale propagation of malicious worms in M2M networks, we must incorporate the heterogeneity of the M2M network into the modeling process.
Although there are approaches to contain the spread of worms, such as antivirus software, intrusion detection systems and network firewalls, these approaches can only give an early warning signal about the presence of a worm so that defensive measures can be taken. Currently, the worm-anti-worm strategy is one of the most effective ways to restrain the propagation of malicious worms. Benign worms are beneficial worms that can proactively defend against malicious worm propagation and patch susceptible hosts. Even when users lack cybersecurity awareness or take poor security measures, benign worms can still maintain network security. That is why, in this paper, we first consider using benign worms to counter the malicious worms in a heterogeneous M2M network.
Many epidemic models [26][27][28][29][30] have studied the spreading dynamics between benign worms and malicious worms in homogeneous networks in recent years, and have proved the effectiveness of the worm-anti-worm strategy in decreasing the transmission of malicious worms. However, to our knowledge, there are no epidemic models that consider the worm-anti-worm strategy in a heterogeneous network. Motivated by this, in this paper, we first propose a novel dynamical model to study the dynamics of interactive infection between malicious worms and benign worms in heterogeneous M2M networks. Furthermore, we compare our model to two other worm propagation models that are based on different immunization schemes in heterogeneous M2M networks. Through theoretical analysis, we find that the dynamics of those models are completely determined by a threshold value, which is each model's basic reproduction number. Besides, in the absence of birth, death and users' treatment, we obtain the final size formula of malicious worms based on a comprehensive immunization scheme. Numerical simulations support our results. We believe that the results can help restrain the spread of malicious worms.
The rest of the paper is organized as follows. Section 2 describes some related works on worm propagation models. We formulate the models in Section 3. In Section 4, we obtain the basic reproduction number and prove the local and global stability of the worm-free equilibrium. Section 5 determines the final size formula. In Section 6, a series of numerical experiments is given to verify the theoretical results. Finally, conclusions are given in Section 7.
Homogeneous worm propagation models are based on the concept of a fully-connected network graph, which ignores real-world network topology. For instance, the classical simple epidemic model [6], the Kermack-McKendrick (KM) epidemic model [7], and the two-factor worm model [8] are all homogeneous models, which describe the propagation of random-scanning malicious worms. This class of models assumes that each host has an equal probability of connecting to any host in the network. In particular, in [8], Zou et al. extended the KM model [7] and analyzed the propagation of Code Red by considering the dynamic countermeasures taken by ISPs (i.e., Internet Service Providers) and users, and the influence of network congestion on the worm infection rate. Later, many extended models [9][10][11][12][13][14][15] were proposed, but they all belong to the homogeneous class.
As more and more malicious worms plunge into the Internet, traditional countermeasure technologies cannot currently scale to deal with the worm threats, and as such worm-anti-worm (WAW) has become a new active countermeasure. The idea of an anti-worm is to transform a malicious worm into a benign worm, which spreads itself using the same mechanism as the original worm. A benign worm can proactively patch and harden configuration settings, and wipe off malicious worms that have infected hosts. There are many worm-anti-worm epidemic models [26][27][28][29]; for instance, based on the two-factor worm model, Zhou et al. [26] modeled each subtype of active benign worms and hybrid worms under circumstances with and without time delay. Ma et al. [27] explored the influence of removable devices on the interaction dynamics between malicious worms and benign worms, and proposed effective methods to contain the propagation of malicious worms using anti-worms.
Topological worm propagation models describe worms that rely on the network topology to spread themselves. To the best of our knowledge, Fan et al. [30] first proposed a novel logic matrix approach to modeling the propagation processes of P2P (i.e., Peer-to-Peer) worms by difference equations, which are essentially discrete-time deterministic propagation models of P2P worms. It is suitable for modeling P2P worms because this model considers the topology of a P2P network. Wang et al. [31] created a microcosmic landscape of worm propagation and successfully analyzed the procedures of worm propagation. Furthermore, some relevant work has been reported in [32][33][34][35].
Faloutsos et al. [36] found that the autonomous Internet topology asymptotically follows a power-law degree distribution. Stimulated by this theory, several heterogeneous models [17][18][19][20][21][22][23][24] have studied the spreading behavior of worms on complex networks. In a complex network, each node represents a host in its corresponding epidemiological state, and each edge between two nodes stands for an interaction that may allow worm transmission. Those studies indicate that the connectivity fluctuations of the network strongly enhance the incidence of infection in complex networks.
The Limitation of Existing Worm Propagation Models
Although all the previous models have qualitatively captured the propagation mechanisms of worms and studied the corresponding control strategies via mathematical modeling, their common shortcoming is that they do not investigate the influence of network heterogeneity on the interaction dynamics between malicious worms and benign worms in a heterogeneous M2M network. That is, the homogeneous models ignore network heterogeneity, and the existing heterogeneous models ignore the influence of network heterogeneity on the interaction of two worms.
Our Proposed Worm Propagation Model
In our work, we first propose a novel dynamic model to study the interaction dynamics of the worm-anti-worm strategy in a heterogeneous M2M network. We then compare our model to two other worm propagation models, which are based on different immunization schemes. Through theoretical analysis, we find that the dynamics of those models are completely determined by a threshold value.
By choosing suitable Lyapunov functions, the global asymptotic stability of the worm-free equilibrium is proved. Besides, in the absence of birth, death and users' treatment, we obtain the final size formula of malicious worms based on a comprehensive immunization scheme. After that, the effects of various immunization schemes are compared. Finally, numerical simulations verify our results.
The Model
In order to study the influence of M2M network heterogeneity on worm spreading, our models use a graph G(V, E) to represent the topological structure of the M2M network, where V is the set of all nodes and E is the set of all edges. In this graph, a node corresponds to a computer, and an edge represents potential communication between two nodes. The number of edges emanating from a node is called the degree of the node. We assume that the total population of computers can be broken into a number of homogeneous sub-compartments, i.e., all nodes in the same sub-compartment are dynamically equivalent, without any difference. We classify the nodes into compartments based on node degree, i.e., nodes in the same sub-compartment have the same degree. Our model is based on a system of ordinary differential equations describing the evolution of the number of individuals in each compartment. Based on the above assumptions, we classify the nodes into different groups, where N_k is the total number of nodes of degree k, Δ is the maximum node degree, and N = N_1 + N_2 + ... + N_Δ is the number of nodes in the whole network. In order to reflect the heterogeneous nature of an M2M network, we consider the node degree distribution p(k) = N_k/N, which is the probability that a randomly chosen node has degree k.
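Since the thresholds derived below depend on the degree distribution only through its moments, it is convenient to compute p(k), ⟨k⟩ and ⟨k²⟩ explicitly; a minimal sketch for an assumed truncated power-law network (exponent and maximum degree are illustrative, not the paper's values):

```python
# Sketch: degree distribution p(k) = N_k / N and its first two moments for a
# truncated power-law network, the building blocks of the heterogeneous
# mean-field models below. Exponent and cutoff are illustrative assumptions.
import numpy as np

k = np.arange(1, 101)          # degrees 1..Delta, with Delta = 100 assumed
gamma_exp = 2.5                # assumed power-law exponent
p_k = k.astype(float) ** (-gamma_exp)
p_k /= p_k.sum()               # normalise so that sum_k p(k) = 1

k_mean = np.sum(k * p_k)       # <k>
k2_mean = np.sum(k**2 * p_k)   # <k^2>, large in heterogeneous networks
print(f"<k> = {k_mean:.2f}, <k^2> = {k2_mean:.2f}, "
      f"<k^2>/<k> = {k2_mean / k_mean:.2f}")
```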
According to their infection states, the nodes can be classified into the following compartments: susceptible nodes (S), malicious infectious nodes (I), benign infectious nodes (B) and recovered nodes (V). The corresponding notations are S_k(t), I_k(t), B_k(t) and V_k(t), the relative densities of S-nodes, I-nodes, B-nodes, and V-nodes of degree k at time t. Some frequently used notations of this paper are listed in Table 1.
Table 1. Notations used in this paper.
Δ: the maximum node degree over all nodes. S_k(t): the relative density of S-nodes of degree k at time t. I_k(t): the relative density of I-nodes of degree k at time t. B_k(t): the relative density of B-nodes of degree k at time t. V_k(t): the relative density of V-nodes of degree k at time t. N_k: the total number of nodes of degree k. N: the number of nodes in the whole network. γ1: the recovery rate at which S-nodes become V-nodes due to the treatment effect. γ2: the recovery rate at which I-nodes become V-nodes due to the treatment effect. Θ1(t): the probability that a randomly chosen link points to an I-node. Θ2(t): the probability that a randomly chosen link points to a B-node. p(k): the probability that a randomly chosen node has degree k.

In Figure 1, Θ1(t) represents the probability that a randomly chosen link points to a malicious infectious node, computed from the relative densities of I-nodes at time t. μ is the birth or death rate of a node; thus S_k(t) + I_k(t) + V_k(t) = 1 for each k. Due to infection by malicious nodes, S-nodes become I-nodes at constant rate β1.
Thus, β1 k S_k(t) Θ1(t) is the rate at which degree-k S-nodes turn into I-nodes at time t. γ1 and γ2 are, respectively, the recovery rates of S-nodes and I-nodes due to the treatment effect of users. Based on Figure 1, the dynamical differential equations for each compartment of degree k are

dS_k(t)/dt = μ − β1 k S_k(t) Θ1(t) − γ1 S_k(t) − μ S_k(t),
dI_k(t)/dt = β1 k S_k(t) Θ1(t) − γ2 I_k(t) − μ I_k(t),        (1)
dV_k(t)/dt = γ1 S_k(t) + γ2 I_k(t) − μ V_k(t).

The initial condition of System (1) satisfies S_k(0), I_k(0), V_k(0) ≥ 0 with S_k(0) + I_k(0) + V_k(0) = 1, and its feasible region U1 is positively invariant.
The S_k I_k B_k V_k Model
In the S_k I_k B_k V_k model, we divide nodes into four compartments: S-nodes, I-nodes, B-nodes and V-nodes. The transfer diagram of each compartment of degree k (k = 1, 2, ..., Δ) is schematically depicted in Figure 2, and the other model parameters are the same as defined for Figure 1. Based on Figure 2, the dynamical differential equations for each compartment of degree k are obtained analogously to System (1), with additional terms for benign-worm infection of susceptible nodes (rate β2), for benign worms wiping off malicious worms in infected nodes (rate β3), and for benign-worm self-destruction (rate ε); together these constitute System (2). The initial condition of System (2) is non-negative and sums to one in each degree class, and its feasible region U2 is positively invariant.
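A sketch of a degree-based mean-field S-I-B-V system consistent with the compartment descriptions above and Table 1; the exact coupling terms are our reading of Figure 2, and all rate values are illustrative assumptions, not the paper's calibrated parameters:

```python
# Hedged reconstruction of the S_k I_k B_k V_k mean-field dynamics:
#   S -> I at beta1*k*Theta1, S -> B at beta2*k*Theta2, S -> V at gamma1,
#   I -> B at beta3*k*Theta2 (benign worms wipe off malicious ones),
#   I -> V at gamma2, B -> V at eps (self-destruction), birth/death rate mu.
import numpy as np
from scipy.integrate import solve_ivp

k = np.arange(1, 101)
p_k = k.astype(float) ** -2.5
p_k /= p_k.sum()
k_mean = (k * p_k).sum()

beta1, beta2, beta3 = 0.10, 0.08, 0.05          # illustrative rates
gamma1, gamma2, eps, mu = 0.01, 0.02, 0.01, 0.001

def rhs(t, y):
    S, I, B = y.reshape(3, k.size)
    th1 = (k * p_k * I).sum() / k_mean          # link points to an I-node
    th2 = (k * p_k * B).sum() / k_mean          # link points to a B-node
    dS = mu - beta1*k*S*th1 - beta2*k*S*th2 - (gamma1 + mu)*S
    dI = beta1*k*S*th1 - beta3*k*I*th2 - (gamma2 + mu)*I
    dB = beta2*k*S*th2 + beta3*k*I*th2 - (eps + mu)*B
    return np.concatenate([dS, dI, dB])         # V_k follows by normalisation

y0 = np.concatenate([np.full(k.size, 0.98), np.full(k.size, 0.01),
                     np.full(k.size, 0.01)])
sol = solve_ivp(rhs, (0, 500), y0, rtol=1e-8)
I_avg = (p_k * sol.y[k.size:2*k.size, -1]).sum()
print(f"average malicious density at t = 500: {I_avg:.4f}")
```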
The S_k I_k V_k Model Based on a Targeted Immunization Strategy
System (1) also admits an endemic equilibrium E*. Its form indicates that a node with higher degree is more susceptible to worm infection than one with lower degree. In addition, we can conclude that the higher the node degree, the higher the infection density, and the density tends to a constant as the node degree goes to infinity. Indeed, this precisely coincides with reality. Thus, we can use a targeted immunization strategy to strengthen protection of the nodes with higher degree [37]. First, we introduce lower and upper thresholds K1 and K2, such that if k ≥ K2, all nodes of degree k are immunized, while if K1 ≤ k < K2, a proportion a (0 < a < 1) of nodes of degree k is immunized. Thus, we can define the immunization rate δ_k as δ_k = 1 for k ≥ K2, δ_k = a for K1 ≤ k < K2, and δ_k = 0 for k < K1, so that ⟨δ⟩ = Σ_k p(k) δ_k is the average immunization rate of the targeted immunization strategy. With this immunization, the ordinary differential equations of System (1) become System (3). The feasible region of System (3) is the same as that of System (1).
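The piecewise immunization rate δ_k and its average ⟨δ⟩ can be written down directly; a short sketch, with illustrative thresholds K1, K2 and proportion a:

```python
# Sketch of the targeted immunization rate delta_k: nodes of degree k >= K2
# are always immunized, a fraction a of nodes with K1 <= k < K2 are
# immunized, and low-degree nodes are not. Threshold values are illustrative.
import numpy as np

def delta_k(k, K1, K2, a):
    """Piecewise immunization rate for degree-k nodes."""
    return np.where(k >= K2, 1.0, np.where(k >= K1, a, 0.0))

k = np.arange(1, 101)
p_k = k.astype(float) ** -2.5
p_k /= p_k.sum()

d = delta_k(k, K1=10, K2=50, a=0.3)
avg_rate = (p_k * d).sum()          # average immunization rate <delta>
print(f"average immunization rate <delta> = {avg_rate:.4f}")
```

Because p(k) is heavy-tailed, immunizing only the high-degree tail keeps ⟨δ⟩ small while removing the nodes that contribute most to ⟨k²⟩, which is exactly why the strategy lowers R03 efficiently.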
Global Stability of the S_k I_k V_k Model
By direct computation, we can easily obtain the equilibria of System (1). The worm-free equilibrium is E0, at which all infected densities vanish. According to the computing method in [38], only the infected compartments I_k are involved in the calculation of the basic reproduction number R0. At the worm-free equilibrium E0, the matrix F gives the rate of appearance of new infections and the matrix W gives the transfer rate of nodes among compartments. Therefore, according to the theorems in [38,39], the basic reproduction number R01 is defined as the spectral radius of the next-generation matrix FW^(-1). Based on R01, we have the following theorem: Theorem 1. When R01 ≤ 1, the worm-free equilibrium E0 of System (1) is locally and globally asymptotically stable in the model's feasible region U1, and unstable when R01 > 1.
Proof. To establish the global asymptotic stability of the worm-free equilibrium, we choose a suitable Lyapunov function L1(t) constructed from the infected densities I_k(t). The time derivative of L1(t) along the solutions of System (1) then satisfies L1'(t) ≤ 0 whenever R01 ≤ 1. Therefore, according to LaSalle's invariance principle [40], when R01 ≤ 1, the worm-free equilibrium E0 of System (1) is locally and globally asymptotically stable in the model's feasible region U1. When R01 > 1, L1'(t) > 0 near E0, and E0 is unstable in U1. Thus, Theorem 1 is proved. □
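The next-generation construction used above can be reproduced numerically; a sketch for the S_k I_k V_k case, where the worm-free susceptible density S0 = μ/(μ + γ1) and the rank-one form of F are the standard heterogeneous mean-field expressions (assumptions about the paper's exact matrices), and all rate values are illustrative:

```python
# Sketch: R01 of the S_k I_k V_k model as the spectral radius of the
# next-generation matrix F W^{-1}. F[i, j] is the rate at which an infected
# degree-j node creates infections among degree-i nodes; W holds the removal
# rates out of the infected compartment. Rate values are illustrative.
import numpy as np

k = np.arange(1, 101).astype(float)
p_k = k ** -2.5
p_k /= p_k.sum()
k_mean = (k * p_k).sum()

beta1, gamma1, gamma2, mu = 0.10, 0.01, 0.02, 0.001
S0 = mu / (mu + gamma1)              # assumed worm-free susceptible density

F = beta1 * np.outer(k * S0, k * p_k) / k_mean
W = (gamma2 + mu) * np.eye(k.size)

R01 = np.max(np.abs(np.linalg.eigvals(F @ np.linalg.inv(W))))
closed_form = beta1 * S0 * (k**2 * p_k).sum() / (k_mean * (gamma2 + mu))
print(f"R01 (spectral radius) = {R01:.4f}, closed form = {closed_form:.4f}")
```

Because F is rank one, the spectral radius collapses to the closed form proportional to ⟨k²⟩/⟨k⟩, which makes explicit why degree fluctuations raise the epidemic threshold's sensitivity in heterogeneous networks.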
Global Stability of the S_k I_k B_k V_k Model
The worm-free equilibrium of System (2) is E0, at which all infected densities vanish, and System (2) also admits an endemic equilibrium E*.
According to the theorems in [38,39], the basic reproduction number R02 is defined as the spectral radius of the corresponding next-generation matrix. Based on R02, we have the following theorem: Theorem 2. When R02 ≤ 1, the worm-free equilibrium E0 of System (2) is locally and globally asymptotically stable in the model's feasible region U2, and unstable when R02 > 1.
Proof. To establish the global asymptotic stability of the worm-free equilibrium, we choose a suitable Lyapunov function L2(t) constructed from the infected densities. The time derivative of L2(t) along the solutions of System (2) satisfies L2'(t) ≤ 0 whenever R02 ≤ 1. Therefore, according to LaSalle's invariance principle [40], when R02 ≤ 1, the worm-free equilibrium E0 of System (2) is locally and globally asymptotically stable in the model's feasible region U2. Otherwise, E0 is unstable in U2. Thus, Theorem 2 is proved. □
The Basic Reproductive Number of System (3)
The initial condition and the worm-free equilibrium of System (3) are the same as those of System (1). Besides, System (3) admits an endemic equilibrium E*. By direct computation, we can also obtain the basic reproduction number R03 of System (3). Thus, by choosing an appropriately small value of K2, we can ensure R03 < 1.
The Final Size Formula
The expected final size of a worm outbreak is defined as the total number of nodes affected by the worm at the end of the epidemic, which is an important indicator of outbreak severity. In this section, using the same computing method as in [38], we first investigate the final size formula of System (2). Second, we derive the final size formula of worms under the comprehensive immunization scheme.
The Final Size Formula of System (2)
First, we show that the worms of System (2) eventually die out, i.e., I_k(t) → 0 and B_k(t) → 0 as t → ∞. Since all initial densities are non-negative, all solutions of System (2) are non-negative and bounded.
In the absence of birth, death and the treatment effect of users, S_k(t) + I_k(t) is non-increasing and bounded below by zero, so it has a limit as t → ∞. Because S_k(t) is decreasing and also bounded below by zero, it too has a limit as t → ∞. Therefore, I_k(t) tends to zero; by the same token, B_k(t) tends to zero. Integrating the equations of System (2) from 0 to ∞ and combining the resulting identities yields the final size relation, which expresses the fraction of nodes that escape infection in terms of the degree distribution and the initial conditions. The final size is therefore related to the network structure: when the other parameters do not change, the degree distribution of the network greatly affects the final size of the worm outbreak; meanwhile, the higher the initial value or the more effective the infection ability of the malicious worms, the larger the scale of the malicious worm outbreak will be.
The Final Size Formula Based on Comprehensive Immunization Scheme
Since compiling the benign-worm program takes time, we should use the targeted immunization scheme to protect the network before benign worms are released into it. Once the benign worm has been compiled, we use only the benign-worm scheme to repair the network. That is, from time 0 to T we use only the targeted immunization scheme, and only the benign-worm scheme from T to t (t > T). Integrating over both phases and letting t → ∞ then yields the final size under the comprehensive immunization scheme, which is also related to the network structure.
Simulations
In the real world, the degree distribution of a network usually obeys a power-law distribution, and we adopt such a distribution in our simulations. Figure 3a exhibits the time plot of System (1); it shows that the malicious worms gradually disappear. When β1 = 0.1
and the other parameters remain the same, R01 = 2.8843 > 1. Figure 3b shows that the malicious worms are then prevalent in the network and all states reach their equilibrium points. This example verifies Theorem 1. Furthermore, considering System (1) with the same parameters as in Figure 3b, Figure 3c exhibits the time plot of malicious infectious nodes for different node degrees. It shows that a host with higher node degree is more susceptible to infection than a host with lower node degree. Figure 4 shows R03 as a function of a and K2. It illustrates that R03 is a decreasing function of a but an increasing function of K2, which means that if a is large or K2 is small, more nodes will be immunized. Finally, we compare targeted immunization with benign-worm immunization. For a fair comparison, we set the average immunization rates of the targeted immunization and benign-worm immunization schemes to the same value of 0.087. Figure 6 indicates that, compared to targeted immunization, benign-worm immunization has an absolute advantage in controlling the spread of malicious worms, even though the average infection rate of the benign worms is small.
Conclusions
To our knowledge, we are the first to study the complex interactions between benign worms and malicious worms in a heterogeneous M2M network. We obtain the equilibria and basic reproduction numbers of three different models, and we prove that the global dynamics are determined by the threshold value R0. In the absence of birth, death and the treatment effect of users, we obtain the final size formula under different immunization schemes. Our results show that nodes with higher node degrees are more susceptible to infection than those with lower node degrees. Furthermore, the effects of various immunization schemes are studied, and numerical simulations verify our results. This paper provides a strong theoretical basis for taking effective measures to control the large-scale propagation of malicious worms in heterogeneous M2M networks. In the future, we plan to incorporate time delays of susceptible and/or infectious computers (maliciously or benignly infected) into the proposed worm propagation models, which will greatly enhance the practicability of our models.
Table 1 (Cont.): β1: the effective infection rate of the malicious worms. β2: the effective infection rate of the benign worms. β3: the effective repair rate at which the benign worms wipe off the malicious worms. ε: the self-destruction rate of the benign worms in the B-nodes.

3.1. The S_k I_k V_k Model. The S_k I_k V_k model contains only three compartments: S-nodes, I-nodes, and V-nodes. Figure 1 is the transfer diagram of each compartment of degree k (k = 1, 2, ..., Δ).

Figure 1. The transfer diagram of the S_k I_k V_k model.
Figure 2. The transfer diagram of the S_k I_k B_k V_k model.
Figure 4. R03 as a function of a and K2.
Figure 6. The average densities of malicious infected nodes for different immunization tactics. | 5,950 | 2015-10-10T00:00:00.000 | [
"Computer Science",
"Mathematics"
] |
Butterfly Distributions of Energetic Electrons Driven by Ducted and Nonducted Chorus Waves
Bursts of electron butterfly distributions at 10s keV correlated with chorus waves are frequently observed in the Earth's magnetosphere. Strictly ducted (parallel) upper‐band chorus waves are proposed to cause them by nonlinear cyclotron trapping. However, chorus waves in these events are probably nonducted or not strictly ducted. In this study, test‐particle simulations are conducted to investigate electron scattering driven by ducted (quasi‐parallel) and nonducted upper‐band chorus waves. Simulation results show butterfly distributions of 10s keV electrons can be created by both ducted and nonducted upper‐band chorus waves in seconds. Ducted upper‐band chorus waves cause these butterfly distributions mainly by accelerating electrons due to cyclotron phase trapping. However, nonducted waves tend to decelerate electrons to form these butterfly distributions via cyclotron phase bunching. Our study provides new insights into the formation mechanisms of electron butterfly distributions and demonstrates the importance of nonlinear interactions in the Earth's magnetosphere.
Drift shell splitting and magnetopause shadowing are theoretically energy independent, which can cause the butterfly PADs of tens of keV to several MeV electrons at larger L-shells (e.g., Fritz et al., 2003; Sibeck et al., 1987). Wave-particle interactions, which are energy dependent, are also an important contributor to the formation of electron butterfly PADs, especially at lower L-shells (e.g., Horne et al., 2007; J. Li et al., 2016; Xiao et al., 2015). Many previous studies have shown that the butterfly PADs of hundreds of keV to several MeV electrons are usually induced by magnetosonic (MS) waves, plasmaspheric hiss, and chorus waves (e.g., Albert et al., 2016; Hua et al., 2019; J. Li et al., 2014, 2016; Ma et al., 2015; Ni et al., 2017, 2018, 2020; Xiao et al., 2015). In recent years, bursts (within ∼30 s) of electron butterfly PADs at tens of keV, highly correlated with chorus waves, have frequently been observed in the Earth's magnetosphere (Fennell et al., 2014; Kurita et al., 2018; Peng et al., 2022). Observational statistics reveal that more than 80% of these events are upper-band chorus dominated events and that these events tend to occur over ∼21-05 MLT at L = ∼4.5-6.5 (Peng et al., 2022). By assuming these chorus waves are strictly ducted (parallel), previous test-particle simulations suggest that nonlinear cyclotron trapping is mainly responsible for the bursts of these butterfly PADs (Gan, Li, Ma, Artemyev, & Albert, 2020; Saito et al., 2021). However, chorus waves in these events are probably nonducted or not strictly ducted, since many of these events are observed without a density structure (see Figure S1). Chorus waves can be ducted (quasi-parallel) in a density structure and propagate nearly along the magnetic field lines (e.g., R. Chen et al., 2021; Ke et al., 2021; Liu et al., 2021; Streltsov et al., 2006). However, nonducted chorus waves tend to become oblique-propagating gradually (e.g., Breuillard et al., 2012; Gao, Lu, et al., 2022; Lu et al., 2019) after leaving their equatorial sources (e.g., LeDocq et al., 1998; W. Li et al., 2009; Santolík et al., 2005). The relationship between nonducted chorus waves and electron butterfly PADs at tens of keV is less well known. In this study, test-particle simulations combined with electron magnetohydrodynamics (EMHD) simulations are used to investigate electron scattering driven by ducted and nonducted upper-band chorus waves. Simulation results show that ducted and nonducted upper-band chorus waves lead to the butterfly PADs of tens of keV electrons through different nonlinear processes.
Simulation Model
To model chorus waves in the inner magnetosphere, the geomagnetic field is set as a dipole field B(λ) = B_0eq √(1 + 3 sin^2 λ)/cos^6 λ, where B_0eq ≈ 3 × 10^4 nT/L^3 is the equatorial field strength and λ is the magnetic latitude. The electron density (in cm^-3) in the plasma trough is set based on an empirical model (Carpenter & Anderson, 1992; Denton et al., 2002, 2006) given by Equation 1, where a is set to 1 and n_e0 = 5,800 + 300 MLT (0 ≤ MLT < 6). In normal (nonducted) cases, n_d = 0. In the presence of a density duct, n_d is given by a localized profile in which L_0 and D_L are, respectively, the central location and the radial width of the duct. We perform three simulation cases: a ducted case with L_0 = 5.5, D_L = 0.035, and δn = 0.1 n_eq(L_0), and two nonducted cases with different wave source scales. Our simulations are carried out around L = 5.5 at MLT = 0, which are typical conditions for bursts of electron butterfly PADs based on observational statistics (Peng et al., 2022).
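As a minimal numerical sketch of this setup, the Python snippet below evaluates the dipole field strength along a field line and a trough density with an optional depleted duct. Because Equation 1 and the duct profile are not reproduced in this excerpt, the L^-4 radial falloff and the Gaussian duct shape are illustrative assumptions, and all function names are ours.

```python
import numpy as np

B0_SURF = 3.0e4  # nT; equatorial field at L = 1, so B_0eq = 3e4 nT / L^3

def dipole_field_nT(L, lam_rad):
    """Dipole field magnitude at L-shell L and magnetic latitude lam_rad."""
    B_eq = B0_SURF / L**3
    return B_eq * np.sqrt(1.0 + 3.0 * np.sin(lam_rad)**2) / np.cos(lam_rad)**6

def trough_density(L, mlt=0.0, L0=5.5, DL=0.035, dn_frac=0.0):
    """Equatorial trough density (cm^-3) with an optional depleted duct.

    The L^-4 radial falloff and the Gaussian duct shape are illustrative
    stand-ins for the empirical model of Carpenter & Anderson (1992) and
    Denton et al. (Equation 1 of the paper).
    """
    n_e0 = 5800.0 + 300.0 * mlt              # valid for 0 <= MLT < 6
    n_eq = n_e0 * (2.0 / L)**4               # assumed radial dependence
    duct = dn_frac * n_eq * np.exp(-((L - L0) / DL)**2)
    return n_eq - duct                       # depleted (lower-density) duct

# Ducted case: 10% depletion centred at L0 = 5.5 with radial width 0.035
L = np.linspace(5.4, 5.6, 9)
print(dipole_field_nT(5.5, np.deg2rad(10.0)))
print(trough_density(L, dn_frac=0.1))
```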
A two-dimensional (2-D) EMHD model (Ke, Gao, et al., 2022) is used to simulate the propagation of upper-band chorus waves. 2-D EMHD simulations are widely used to study the propagation of chorus waves (e.g., Hanzelka & Santolík, 2022; Hosseini et al., 2021; Katoh, 2014). Chorus waves are excited by energetic electrons injected from the magnetotail and drifting eastward around the Earth; they usually consist of quasi-periodic discrete elements with frequency chirping, the majority of which exhibit rising-tone elements (Burtis & Helliwell, 1969; W. Li et al., 2012; Tsurutani & Smith, 1974). The repetition period of chorus elements is found to be correlated with the drift velocity of energetic electrons (Gao, Chen, et al., 2022; Lu et al., 2021), with a typical value of ∼0.5 s, and each element lasts from ∼0.1 to 1 s (Shue et al., 2015, 2019; Teng et al., 2017). The transverse source scale of chorus elements is estimated to be in the range ∼100-800 km (∼0.016-0.126 R_E, where R_E is the Earth radius) based on multiple-satellite measurements (e.g., Agapitov et al., 2017, 2018; Santolík & Gurnett, 2003; Shen et al., 2019). In our simulations, parallel upper-band chorus waves with a repetition period
T_RP = 0.5 s and a typical element duration T_D = 0.3 s are launched from an equatorial source region. For each chorus element, the frequency f rises evenly from 0.5 f_ce to 0.65 f_ce, and the amplitude remains constant at B_w0 = 0.0011 B_0eq,0 = 0.2 nT in the middle phase but rises (drops) in the initial (end) phase T_ini (T_end) = 0.1 T_D (B_0eq,0 is B_0eq(L_0) and f_ce is the equatorial electron gyrofrequency at L_0). The wave spectra are shown in Figure 1a.
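A sketch of the launched wave train, under the stated parameters, might look as follows; the equatorial gyrofrequency value and the linear shape of the amplitude ramps are assumptions made here for illustration.

```python
import numpy as np

T_RP, T_D = 0.5, 0.3      # repetition period and element duration (s)
F_CE = 5.0e3              # assumed equatorial electron gyrofrequency (Hz)
B_W0 = 0.2                # mid-phase wave amplitude (nT), 0.0011 * B_0eq,0

def element_frequency(tau):
    """Frequency rises evenly from 0.5 f_ce to 0.65 f_ce over one element."""
    return (0.5 + 0.15 * tau / T_D) * F_CE

def element_envelope(tau):
    """Constant mid-phase amplitude with ramps over T_ini = T_end = 0.1 T_D."""
    t_ramp = 0.1 * T_D
    env = np.minimum(tau / t_ramp, (T_D - tau) / t_ramp)
    return B_W0 * np.clip(env, 0.0, 1.0)

def wave_train(t):
    """Repeat the element every T_RP; zero amplitude between elements."""
    tau = np.mod(t, T_RP)
    in_element = tau < T_D
    amp = np.where(in_element, element_envelope(tau), 0.0)
    freq = np.where(in_element, element_frequency(tau), np.nan)
    return amp, freq

t = np.linspace(0.0, 1.0, 2001)
amplitude_nT, frequency_Hz = wave_train(t)
```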
The chorus source scale is set as ΔL = 0.042 (L = 5.479-5.521) in Ducted case and Nonducted Case 1, and 3ΔL = 0.126 (L = 5.479-5.605) in Nonducted Case 2. Moreover, the wave amplitudes decrease from B_w0 to 0 at the inner and outer edges (with a width δ_L = 0.0042) of each source region. The simulation domain, within λ ≈ -15° to 15°, spans L = 5.4-5.55 in Ducted case and Nonducted Case 1, and L = 5.4-5.63 in Nonducted Case 2. The L-shell width in Case 2 is 1.533 times that in Case 1. Absorbing boundary conditions are applied for the waves.
Assuming the wave fields are confined to |λ| < 15° is reasonable for upper-band chorus waves according to satellite statistics (e.g., Meredith et al., 2009; Teng et al., 2019). The numbers of simulation grid points in the parallel and perpendicular directions are N_∥ = 16,000 and N_⊥ = 1,000 (or 1,533) for Ducted case and Nonducted Case 1 (or Nonducted Case 2).
Test-particle simulations are used in combination with the 2-D EMHD simulations. Initially, test electrons are located at the equator at L_0 = 5.5. They are uniformly distributed in energy E from 2 to 90 keV with ΔE = 2 keV, in equatorial pitch angle α_eq from 5° to 89° with Δα_eq = 2° (the loss cone is ∼3°), and in gyroangle φ from 0° to 354° with Δφ = 6°. A particle weight is given to each test electron according to the assumed initial electron flux distribution, a power law in which j_0 is the differential flux at E_0 = 10 keV and α_eq = 90°. The index p is set as 1 and 2 at energies less and greater than E_0, respectively. The initial electron flux distribution is shown in Figure 1b. Similar power-law distributions of energetic electrons have been widely used in previous works (e.g., Bortnik et al., 2011; L. Chen et al., 2012; Saito & Miyoshi, 2022). The simulation time step is 1.5771 × 10^-6 s and the total simulation time is 3 s.
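The initial loading can be sketched as below; the explicit power-law form, including the sin α_eq pitch angle dependence that makes the flux peak at 90°, is our assumption, since the paper's weighting equation is not reproduced in this excerpt.

```python
import numpy as np

# Grids exactly as stated: E in 2-90 keV, alpha_eq in 5-89 deg, phi in 0-354 deg
E = np.arange(2.0, 90.0 + 1e-9, 2.0)                     # 45 energies (keV)
alpha_eq = np.deg2rad(np.arange(5.0, 89.0 + 1e-9, 2.0))  # 43 pitch angles
phi = np.deg2rad(np.arange(0.0, 354.0 + 1e-9, 6.0))      # 60 gyroangles

E0, J0 = 10.0, 1.0   # reference energy (keV) and flux scale (arbitrary)

def initial_flux(E_keV, a_eq):
    """Assumed power law: j = j0 (E/E0)^(-p) sin(a_eq), with p = 1 below
    E0 and p = 2 above it; the sin(a_eq) factor is illustrative only."""
    p = np.where(E_keV < E0, 1.0, 2.0)
    return J0 * (E_keV / E0) ** (-p) * np.sin(a_eq)

# One weight per (E, alpha, phi) grid point; weights are gyroangle independent
EE, AA, PP = np.meshgrid(E, alpha_eq, phi, indexing="ij")
weights = initial_flux(EE, AA)
print(weights.shape)   # (45, 43, 60) -> 116,100 test electrons
```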
Simulation Results
Figure 1 shows an overview of the ducted and nonducted propagation of the chorus waves in our EMHD simulations. Figures 1c-1e show the spatial profiles of the magnetic field amplitudes of upper-band chorus waves at t = 0.2 s (marked by the dashed line in Figure 1a) in the three simulation cases. Figure 1c illustrates that upper-band chorus waves propagate nearly along the magnetic field lines in a density duct with a 10% density reduction. Evidently, these upper-band chorus waves are guided by the depleted duct, consistent with previous studies (e.g., Liu et al., 2021; Smith et al., 1960). Without a density duct, upper-band chorus waves gradually deviate from the magnetic field lines crossing the wave source region (simply called source field lines) during their propagation. Most of these nonducted waves completely deviate from the source field lines at |λ| < 5° in Nonducted Case 1 and at |λ| < 10° in Nonducted Case 2 (Figures 1d and 1e). The averaged amplitude B_w and wave normal angle θ of these chorus waves at f/f_ce = 0.55-0.6 along L_0 = 5.5 are estimated (in Ducted case and Nonducted Case 2) and shown in Figures 1f and 1g, respectively. In Ducted case, B_w and θ of these chorus waves fluctuate around 0.0011 B_0eq,0 and 8° at |λ| ≤ 11.5°, respectively. Neither B_w nor θ is shown at |λ| > 11.5°, since these waves decay sharply there due to the absorbing boundary conditions. In Nonducted Case 2, B_w/B_0eq,0 of these chorus waves decreases from 0.0011 to 0.0006 within |λ| ≤ 7.5° and then drops quickly to 0.0002 within |λ| = 7.5°-9°, since more nonducted waves deviate from the source field lines at higher latitudes. Besides, θ of these nonducted waves increases rapidly with latitude, reaching up to 36° at |λ| = 9°.
Figure 2 shows the simulated flux distributions of energetic electrons projected onto the equatorial plane. Figures 2a-2c present the electron flux distributions at t = 3 s in the three simulation cases. Compared to the initial flux distribution (Figure 1b), the flux distributions at t = 3 s are greatly modified by the chorus waves and are clearly different in the ducted and nonducted cases. In Ducted case, chorus waves cause significant flux decreases and increases at α_eq ∼ 40°-60° and α_eq ∼ 60°-80°, respectively. In the Nonducted cases, chorus waves cause both significant flux increases and decreases at α_eq ∼ 60°-80°. Besides, the significant flux increases in the Nonducted cases occur at higher energies than those in Ducted case. The greatest relative flux increases j_t=3s/j_t=0s occur in these regions. Figures 2d-2f show the pitch angle distributions at two energy channels as functions of time. The PADs at E = 18 keV become butterfly PADs only in Ducted case. We identify a butterfly PAD based on two criteria. The first is similar to that of Ni et al. (2016): j(90°) < β × j_avg(α: 90°), where j(90°) is the electron flux at α_eq = 90°, j_avg(α: 90°) is the electron flux averaged over the pitch angle range between α and 90°, α is selected from 45° to 85° to determine the maximum value of j_avg(α: 90°), and the index β is set as 0.95. This criterion identifies an electron PAD with a flux minimum at 90°. The second is j_max ≥ η × j(90°), where j_max is the maximum flux at α_eq = 0°-90° and the threshold η is set as 1.5. This criterion excludes PADs with only slight flux variations, like the PADs at E = 18 keV in Nonducted Case 1 at t ≥ 2 s (Figure 2e). Based on these criteria, the energy ranges (widths) of the butterfly PADs are estimated as 16-44 keV (28 keV), 36-58 keV (22 keV), and 36-54 keV (18 keV) in Ducted case, Nonducted Case 1, and Nonducted Case 2, respectively.
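The two criteria translate directly into code; the sketch below assumes a discretely sampled PAD and a 1° scan step for α, which the paper does not specify.

```python
import numpy as np

def is_butterfly(alpha_deg, j, beta=0.95, eta=1.5):
    """Butterfly-PAD test following the two criteria in the text.

    alpha_deg : equatorial pitch angles in degrees (ascending)
    j         : electron flux sampled at those pitch angles
    """
    j90 = j[np.argmin(np.abs(alpha_deg - 90.0))]   # flux nearest 90 deg

    # Criterion 1: j(90) < beta * max over alpha in [45, 85] deg of the
    # mean flux between alpha and 90 deg (flux minimum at 90 deg).
    j_avg_max = -np.inf
    for a in np.arange(45.0, 85.0 + 1e-9, 1.0):    # 1-deg step assumed
        sel = (alpha_deg >= a) & (alpha_deg <= 90.0)
        if sel.any():
            j_avg_max = max(j_avg_max, j[sel].mean())
    crit1 = j90 < beta * j_avg_max

    # Criterion 2: peak flux at least eta * j(90), rejecting PADs with
    # only slight flux variations.
    crit2 = j[(alpha_deg >= 0.0) & (alpha_deg <= 90.0)].max() >= eta * j90
    return crit1 and crit2

# Synthetic PAD peaking near 70 deg with a dip toward 90 deg
alpha = np.arange(5.0, 90.0, 2.0)
j = 1.0 + 0.8 * np.exp(-((alpha - 70.0) / 10.0) ** 2) - 0.3 * (alpha > 84)
print(is_butterfly(alpha, j))   # True
```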
The ratios of the electron fluxes at t = 3 s and t = 0 s in the three simulation cases are shown in Figures 3a-3c, which indicate where the fluxes increase or decrease. In Ducted case, the large flux increases appear at E ∼ 16-46 keV and α_eq ∼ 60°-80°, while the evident flux decreases appear at lower energies and pitch angles (Figure 3a). In Nonducted Case 1 and Case 2, the large flux increases occur at E ∼ 38-58 keV and α_eq ∼ 60°-75°, while the main flux decreases occur at higher energies and pitch angles (Figures 3b and 3c). A box with ΔE = 8 keV and Δα_eq = 8° is located in the region with the largest averaged flux ratio in each panel of Figure 3. The central location (E, α_eq) of this box is (39 keV, 74°) in Ducted case, (47 keV, 71°) in Nonducted Case 1, and (46 keV, 66°) in Nonducted Case 2. Figures 3d-3f show the initial flux distributions of the electrons scattered into the box (at t = 3 s) in the three cases. In Ducted case, 91% of the scattered electrons are accelerated, and their α_eq also increase.
Most of them are initially distributed at E ∼ 15-35 keV and α_eq ∼ 15°-70°. However, 77% (or 84%) of the scattered electrons are decelerated in Nonducted Case 1 (or Case 2), and their α_eq also decrease. Most of them are initially distributed at energies up to ∼63 keV and pitch angles up to ∼82°. Evidently, the electrons scattered into the box are accelerated or decelerated by upper-band chorus waves through the cyclotron resonance rather than the Landau resonance, which would cause opposite variations of E and α_eq. In short, ducted upper-band chorus waves cause these butterfly PADs of 10s keV electrons mainly by accelerating electrons, while nonducted waves do so mainly by decelerating them.
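The box placement can be reproduced with a brute-force scan over candidate centres, as sketched below on synthetic data shaped like the Ducted-case enhancement; the grid steps and edge handling are assumptions.

```python
import numpy as np

def find_peak_box(E, alpha, ratio, dE=8.0, dalpha=8.0):
    """Locate the (dE x dalpha) box with the largest mean flux ratio.

    E, alpha : 1-D grids (keV, deg); ratio : 2-D array of j(t=3s)/j(t=0s)
    with shape (len(E), len(alpha)).  Boxes near grid edges are simply
    truncated.
    """
    best, centre = -np.inf, (np.nan, np.nan)
    for Ec in E:
        sel_E = np.abs(E - Ec) <= dE / 2.0
        for ac in alpha:
            sel_a = np.abs(alpha - ac) <= dalpha / 2.0
            mean_ratio = ratio[np.ix_(sel_E, sel_a)].mean()
            if mean_ratio > best:
                best, centre = mean_ratio, (Ec, ac)
    return centre, best

# Synthetic ratio map enhanced around (39 keV, 74 deg), as in Ducted case
E = np.arange(2.0, 92.0, 2.0)
alpha = np.arange(5.0, 91.0, 2.0)
EE, AA = np.meshgrid(E, alpha, indexing="ij")
ratio = 1.0 + 2.0 * np.exp(-((EE - 39.0) / 6.0) ** 2 - ((AA - 74.0) / 5.0) ** 2)
print(find_peak_box(E, alpha, ratio))   # centre near (39, 74)
```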
The typical trajectories of an accelerated electron in Ducted case and a decelerated electron in Nonducted Case 2 are presented in Figure 4, which shows the energy E, pitch angle α_eq, and parallel velocity v_∥ as functions of the
magnetic latitude. The electron in Ducted case, with an initial energy of E ∼ 30 keV, is first accelerated to ∼50 keV from λ = 8° to λ = 2° and then undergoes several rapid decelerations at |λ| ≤ 5°, finally reaching 40 keV (Figure 4a). The variation trend of α_eq is very similar to that of E for this electron (Figure 4b). The parallel velocity v_∥ decreases while fluctuating around the cyclotron resonant velocity v_R in the acceleration phase, and increases sharply, intersecting v_R, in the rapid deceleration phases (Figure 4c). Evidently, this electron first undergoes nonlinear phase trapping and then probably experiences phase bunching. The electron in Nonducted Case 2, with an initial energy of E = 52 keV, only undergoes several rapid decelerations, similar to the electron decelerations in Ducted case. It probably experiences phase bunching several times at |λ| ≤ 7°, and its energy finally drops to 42 keV (Figures 4d-4f).
If an electron is phase trapped, the phase ζ between its perpendicular velocity v_⊥ and the perpendicular wave magnetic field B_w⊥ changes periodically and satisfies 0 < ζ < 2π. In the simulation results, phase trapping of an electron is identified by the criterion that the phase ζ satisfies 0 < ζ < 2π for more than five periods. A group of electrons with the same initial E and α_eq but uniformly distributed gyrophases are scattered roughly symmetrically by quasilinear scattering, but most of them are scattered to lower E and α_eq by phase bunching (e.g., Bortnik et al., 2008). Generally, phase bunching leads to significantly larger energy and pitch angle changes than quasilinear scattering. We assume such a group of electrons is in a quasilinear scattering period until nonlinear scattering sets in, where ΔE is the energy change of each electron within a half bounce period T_hb (Gan, Li, Ma, Albert, et al., 2020). An electron is assumed to be phase bunched if ΔE < -|ΔE|_max and Δα_eq < -|Δα_eq|_max within T_hb, where |ΔE|_max and |Δα_eq|_max are the maximum energy and pitch angle changes within T_hb of all the electrons in their quasilinear scattering periods. In the three simulation cases, |ΔE|_max and |Δα_eq|_max are found to be about 1 keV and 1°. Of the electrons scattered into the box in Ducted case, 78% have experienced phase trapping and 81% have experienced phase bunching. Most of them end up accelerated as a result of the competition between phase trapping and phase bunching. Besides, 64% of the trapped electrons begin their trapping at |λ| > 9°. Of the electrons scattered into the box in Nonducted Case 1 (or Case 2), 72% (or 86%) have experienced phase bunching, but only 22% (or 25%) have experienced phase trapping. Most of them end up decelerated due to multiple phase bunchings, which mainly occur at |λ| < 5°.
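These identification rules can be written compactly; in the sketch below the phase history is taken to be continuously tracked (not wrapped), the sampling cadence is an assumption, and the sign convention of the bunching thresholds follows the reconstruction above.

```python
import numpy as np

def is_phase_trapped(zeta, steps_per_wave_period, n_periods=5):
    """Trapping: the resonance phase zeta stays inside (0, 2*pi) for more
    than five wave periods.  zeta is the continuously tracked phase
    history of one electron; the time step per wave period is assumed."""
    inside = (zeta > 0.0) & (zeta < 2.0 * np.pi)
    run = longest = 0
    for flag in inside:
        run = run + 1 if flag else 0
        longest = max(longest, run)
    return longest > n_periods * steps_per_wave_period

def is_phase_bunched(dE_keV, dalpha_deg, dE_max=1.0, dalpha_max=1.0):
    """Bunching within a half bounce period: energy and pitch angle drops
    exceed the maximum quasilinear changes (about 1 keV and 1 deg in the
    three simulation cases)."""
    return (dE_keV < -abs(dE_max)) and (dalpha_deg < -abs(dalpha_max))

# A phase that oscillates around pi without crossing 0 or 2*pi (trapped-like)
t = np.linspace(0.0, 50.0, 5001)
trapped_like = np.pi + 1.5 * np.sin(t)
print(is_phase_trapped(trapped_like, steps_per_wave_period=100))  # True
print(is_phase_bunched(-1.8, -1.2))                               # True
```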
Discussion and Conclusions
In this study, test-particle simulations in combination with 2-D EMHD simulations have been performed to study the scattering of 10s keV electrons driven by ducted and nonducted upper-band chorus waves. Our simulation results suggest that ducted waves cause the butterfly PADs of 10s keV electrons mainly by accelerating electrons via cyclotron phase trapping, while nonducted waves cause these butterfly PADs mainly by decelerating electrons via cyclotron phase bunching. We explain the difference based on the nonlinear resonant conditions (Bell, 1984, 1986; Tao & Bortnik, 2010). Cyclotron phase trapping is possible under the condition ω_t^2/|h(z, t)| > 1, where ω_t is the trapping frequency and h(z, t) is the inhomogeneity factor, as described in Equations 2-4 of Bell (1986). When the chorus waves resonate with a 10s keV electron at α_eq < ∼60° and |λ| < 10°, the trapping frequency takes the approximate form
ω_t^2 ≈ (q_e k_∥ v_⊥/m_e) √(B_w⊥1 B_w⊥2), where q_e and m_e are the electron charge and mass, k_∥ is the parallel wave number, and B_w⊥1 and B_w⊥2 are the two components of the perpendicular wave magnetic field. The inhomogeneity factors h(z, t) at |λ| < 10° for ducted and nonducted waves are only slightly different, being mainly related to the inhomogeneity of the background magnetic field ∂Ω_e/∂z and the wave frequency chirping rate ∂ω/∂t. The wave magnetic field amplitudes B_w therefore dominate the difference in ω_t^2/h(z, t) between ducted and nonducted waves. Phase bunching occurs easily when ω_t^2/h(z, t) is much greater than 1. Thus, it occurs readily for both ducted and nonducted waves at lower latitudes (|λ| ≤ 5°) because h(z, t) is smaller there due to the smaller ∂Ω_e/∂z (∂Ω_e/∂z = 0 at λ = 0°). The amplitudes B_w of ducted waves decrease only slightly along the magnetic field lines due to the ducted propagation and easily satisfy ω_t^2/h(z, t) > 1 even at higher latitudes (|λ| > 5°). Thus, ducted waves trap and accelerate electrons to produce the butterfly PADs. Although phase bunching is also involved in the electron dynamics, the acceleration effect due to phase trapping is dominant. The amplitudes B_w of nonducted waves decrease rapidly along the magnetic field lines due to the nonducted propagation and hardly satisfy ω_t^2/h(z, t) > 1 at higher latitudes. Thus, nonducted waves mainly decelerate electrons to cause the butterfly PADs, since phase bunching dominates the electron dynamics. Besides, we have also performed two simulation cases with a smaller amplitude B_w0 = 0.05 nT (the other parameters being the same as in Ducted case and Nonducted Case 1) and another two simulation cases around L_0 = 6.5, and the simulation results lead to the same conclusions.
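A back-of-the-envelope evaluation of this competition can be sketched as follows; the √(B_w⊥1 B_w⊥2) combination mirrors the reconstructed expression above, and all numerical magnitudes are placeholders rather than values from the simulations.

```python
import numpy as np

Q_OVER_M = 1.7588e11   # electron charge-to-mass ratio (C/kg)

def omega_t_sq(k_par, v_perp, Bw_perp1, Bw_perp2):
    """Squared trapping frequency following the reconstruction above."""
    return Q_OVER_M * k_par * v_perp * np.sqrt(Bw_perp1 * Bw_perp2)

def nonlinearity_ratio(k_par, v_perp, Bw_T, h):
    """omega_t^2 / h: nonlinear trapping/bunching requires values above 1;
    h bundles the field inhomogeneity dOmega_e/dz and the chirp rate."""
    return omega_t_sq(k_par, v_perp, Bw_T, Bw_T) / h

# Placeholder magnitudes: a ducted-like 0.2 nT wave versus a nonducted
# wave decayed to 0.02 nT at higher latitude, for the same h
k_par, v_perp, h = 2.0e-3, 8.0e7, 1.0e6
for Bw in (0.2e-9, 0.02e-9):
    r = nonlinearity_ratio(k_par, v_perp, Bw, h)
    print(f"Bw = {Bw:.0e} T -> omega_t^2/h = {r:.2f}")
```

Because the ratio scales linearly with B_w, a tenfold amplitude decay along the field line is enough to move a wave from the nonlinear regime (ratio above 1) into the quasilinear one, which is the essence of the ducted versus nonducted contrast discussed here.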
Nonducted chorus waves can potentially accelerate electrons by Landau trapping, which contributes less to the formation of electron butterfly PADs in our simulations. In Ke, Lu, et al. (2022), a lower-band chorus wave with a constant frequency f/f_ce = 0.4 and an amplitude B_w0 = 0.005 B_0eq,0 can form butterfly PADs of tens of keV electrons by Landau trapping. There are two main reasons for the difference. One is that the lower-band chorus propagates almost along the magnetic field line at |λ| < 15°. The other is that the amplitude of the lower-band chorus is much larger. Therefore, the lower-band chorus satisfies the Landau trapping condition for tens of keV electrons at large pitch angles more easily.
In the realistic magnetosphere, the magnetic amplitude of nonducted upper-band chorus waves may decrease even faster due to Landau damping (Bortnik et al., 2007), which is not included in our EMHD simulations. Thus, nonducted upper-band chorus waves are all the more likely to form the electron butterfly PADs through phase bunching. However, direct observational evidence is still lacking. Kurita et al. (2018) reported an event detected by Arase which indicates that the electron butterfly PADs result from the acceleration of lower-energy electrons; unfortunately, plasma density data are absent for the same period. It is difficult to determine whether the electron butterfly PADs observed by the Van Allen Probes in previous studies originate from lower- or higher-energy electrons because of the limited resolution (Fennell et al., 2014; Peng et al., 2022). Obtaining such observational evidence is therefore left for future work. The primary conclusions are summarized as follows.
1. Test-particle simulations in combination with 2-D EMHD simulations demonstrate both ducted and nonducted upper-band chorus waves can form significant butterfly distributions of tens of keV electrons within seconds.
2. Ducted upper-band chorus waves tend to cause the butterfly distributions of tens of keV electrons by accelerating electrons via cyclotron phase trapping.
3. Nonducted upper-band chorus waves tend to decelerate electrons to cause the butterfly distributions of tens of keV electrons via cyclotron phase bunching.
Figure 1. (a) The frequency-time spectrogram of upper-band chorus waves launched from the magnetic equator and (b) the initial electron flux distribution at the magnetic equator in our simulations. (c)-(e) The spatial profiles of the magnetic field amplitudes of chorus waves at t = 0.2 s in the three simulation cases. (f), (g) The averaged amplitude B_w and wave normal angle θ of these chorus waves at f/f_ce = 0.55-0.6 along L_0 = 5.5 in Ducted case and Nonducted Case 2.
Figure 2. (a-c) The electron flux distributions as functions of the equatorial pitch angle and the energy at t = 3 s and (d-f) the pitch angle distributions of electrons at two energy channels at t = 0, 1, 2, and 3 s in the three simulation cases.
Figure 3. (a-c) The ratios of the electron fluxes at t = 3 s and t = 0 s in the three simulation cases. A box with ΔE = 8 keV and Δα_eq = 8° is located in the region with the largest averaged flux ratio in each panel. (d-f) The initial flux distributions of the electrons scattered into the box (at t = 3 s) in the three simulation cases.
Figure 4. (a-c) The energy E, equatorial pitch angle α_eq, and parallel velocity v_∥ of a typical accelerated electron in Ducted case as functions of the magnetic latitude λ. (d-f) The E, α_eq, and v_∥ of a typical decelerated electron in Nonducted Case 2 as functions of λ. The unit of v_∥ is V_Ae = B_0eq,0/√(μ_0 m_e n_eq(L_0)), where μ_0 is the permeability of vacuum. The blue and red dotted lines mark the cyclotron resonant velocities v_R at f = 0.5 f_ce and f = 0.7 f_ce. The black and blue asterisks mark the start and end of the trajectories. | 5,503.4 | 2024-05-23T00:00:00.000 | [
"Physics"
] |
Thermally Introduced Bismuth Clustering in Ga(P,Bi) Layers under Group V Stabilised Conditions Investigated by Atomic Resolution In Situ (S)TEM
We report the formation of Bi clusters in Ga(P1-x,Bix) layers during an in situ (scanning) transmission electron microscopy ((S)TEM) annealing investigation. The non-destructive temperature regime as a function of the tertiarybutylphosphine (TBP) pressure in the in situ cell was determined first, to ensure that the results are not distorted by any destructive behaviour of the crystal during the thermal treatment. The subsequent annealing series on the Ga(P92.6Bi7.4) and Ga(P96.4Bi3.6) layers reveals that the threshold temperature at which Bi clustering takes place is 600 °C in the Ga(P92.6Bi7.4) layer. Further thermal treatments up to 750 °C show a relationship between the Bi fraction in the Ga(P1-x,Bix) layer and the initial temperature at which Bi clustering takes place. Finally, we investigate one Bi cluster under atomic-resolution conditions. Under these conditions, we find that the Bi cluster crystallizes in a rhombohedral phase, aligning its {101} planes parallel to the Ga(P,Bi) {202} planes.
Bi-containing III/V semiconductor materials have seen growing interest due to the strong reduction of the band gap energy they exhibit even with a small fraction of Bi 1-3. Furthermore, a growing Bi content leads to a drastic increase of the spin-orbit splitting 4. For example, in GaAs the band gap is reduced by about 90 meV per percent of Bi incorporation 2. With a Bi fraction higher than 10%, the spin-orbit splitting becomes even larger than the band gap 5,6. Owing to this modification, Auger recombination processes can be suppressed, leading to an increased efficiency in photonic devices. Several bismide materials have shown properties similar to those of Ga(As,Bi), promising new applications in optoelectronics 7-11. Theoretical calculations on the properties of Ga(P,Bi), which could be a promising candidate for tuning the emission of optical sources as well as for the fabrication of laser devices emitting in the telecommunication wavelength range, have also been reported 12. Since GaP can be grown nearly lattice-matched on Si 13, the possibility exists to grow Ga(P,Bi) devices on Si 14. The fabrication of materials with a high fraction of Bi is, however, challenging due to its highly metastable nature. Furthermore, the grown Bi-containing materials must have a very high quality in terms of crystal defects to be practical for optoelectronic applications. The investigation by Christian et al. 15 demonstrated a Bi incorporation of 3.2% in GaP using a molecular beam epitaxy (MBE) setup. Later results on metal organic vapor phase epitaxy (MOVPE) grown Ga(P,Bi) samples by Nattermann et al. 14 showed a Bi incorporation of up to 8%. In addition, in situ information with regard to Bi cluster formation during growth and post-growth annealing is missing, and the mechanisms involved are not yet understood 16,17. Despite all successes, the optimization of growth conditions for functional III/V semiconductor materials is still very challenging, as can be seen from the growing body of publications within this field 18-20. In particular, in situ information promises to deliver details about the growth mechanisms and can therefore help to improve the growth parameters. One of the advantages of in situ methods is that they enable live insights into dynamic processes. In situ (scanning) transmission electron microscopy ((S)TEM) has proven to be an outstanding technique for this kind of observation 21,22. The compatibility of modern in situ holders with most TEMs makes this technique feasible for a wide range of research. In situ TEM systems, such as the Atmosphere system 23 developed and produced by Protochips (Morrisville, NC, USA), enable observations at gas pressures up to 1,000 hPa and temperatures of 1,000 °C. With these specifications, the experimental growth conditions in terms of temperature and precursor pressure of MOVPE, which is commonly used to fabricate novel III/V semiconductor compounds, can be realized.
In this paper we use in situ (S)TEM to investigate the threshold temperature at which local Bi clustering takes place. We further investigate the cluster formation process as a function of the Bi fraction of the Ga(P,Bi) layers under group V stabilized conditions. Finally, we give a detailed insight into the Bi cluster characteristics under atomic-resolution conditions.
Experimental Methods
The Ga(P1-x,Bix)/GaP sample studied in this work was grown by MOVPE using a commercially available Aixtron horizontal reactor system (AIX 200 reactor) with gas foil rotation. Triethylgallium (TEGa), tertiarybutylphosphine (TBP), and trimethylbismuth (TMBi) were used as precursors for Ga, P, and Bi, respectively. The growth temperature for the Ga(P1-x,Bix) layers was set to 400 °C, while the GaP barrier growth temperature was set to 625 °C to remove surplus Bi from the surface. Further information, including a more detailed description of the MOVPE growth procedure of the Ga(P1-x,Bix)/GaP sample, is summarized thoroughly by Nattermann et al. 14. The sample used for the present study consists of two 55 nm thick Ga(P1-x,Bix) layers with Bi fractions of x = 3.6% and x = 7.4%, separated by GaP barriers with an approximate thickness of 100 nm. The Bi fraction of each Ga(P1-x,Bix) layer was determined using secondary ion mass spectrometry (SIMS), with complementary high resolution X-ray diffraction (HRXRD) measurements taken for comparative purposes. Here an HRXRD pattern was simulated around the (004)-GaP substrate peak, assuming a GaBi lattice constant of 6.33 Å 3. Due to the micrometer-sized spot of the XRD, the resulting content values represent an average. Since potential clusters may exhibit a rhombohedral structure 16 and their size is on the nanometer length scale, it is unlikely that they influence these (004) rocking curves at all. The sample preparation for the presented in situ study was done using a JEOL JIB 4601F FIB system. The detailed preparation process of the lamella and the loading procedure of the finished specimen into the in situ cell will not be repeated here; a detailed description can be found in Straubinger et al. 24. In summary, an electron transparent lamella was prepared out of a wafer piece as described in Schaffer et al. 25. Upon completion, the lamella was rotated by 90° along its long axis and mounted with the electron transparent section positioned precisely above the electron transparent SiN window of the thermal e-chip produced by Protochips. The STEM investigations were performed in a double-CS-corrected JEOL JEM 2200 FS field emission TEM operating at 200 kV. For the in situ investigation, a modified Atmosphere system with a gas environmental cell holder from Protochips was used. A detailed description of the system, the specifications, and the required modifications is given in Straubinger et al. 26. This setup allows for high resolution (S)TEM investigation of III/V semiconductor materials in the growth temperature regime and under the necessary group V stabilization, which is indispensable when aiming to avoid group V desorption. Image generation was carried out in high angle annular dark field (HAADF) mode, also known as Z-contrast mode 27,28 due to the underlying Rutherford-like distribution of the scattered electrons. Hence, the detected intensity is proportional to Z^1.6-2, which helps to intuitively distinguish between different elements. In order to allow thickness determination in later stages, the acquired images were normalized with respect to the impinging beam following the procedure described in He & Li 29. The HAADF intensity is corrected by subtracting the intensity contribution of the SiN windows from the measured intensity of the crystalline sample. This approximation is justified because both SiN windows are amorphous and highly defocused.
Subsequent comparison with complementary contrast simulations using the frozen phonon approximation available in STEMsim 30 was used to determine the lamella thickness. A more detailed description of all simulation parameters is given in ref. 26.
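The normalization and thickness lookup described here can be sketched as follows; the numbers and the shape of the intensity-thickness lookup table are hypothetical stand-ins for detector calibration values and the STEMsim frozen-phonon results.

```python
import numpy as np

def normalize_haadf(raw, dark, beam):
    """Express HAADF counts as a fraction of the impinging beam intensity
    (normalization in the spirit of He & Li, ref. 29)."""
    return (raw - dark) / (beam - dark)

def subtract_windows(i_total, i_windows):
    """Remove the contribution of the amorphous, highly defocused SiN
    windows, measured next to the crystalline lamella."""
    return i_total - i_windows

def thickness_from_lut(i_norm, lut_t_nm, lut_i):
    """Invert a simulated intensity-vs-thickness curve by interpolation."""
    return float(np.interp(i_norm, lut_i, lut_t_nm))

# Hypothetical calibration and simulation values, purely to show the flow
lut_t = np.array([20.0, 40.0, 60.0, 80.0, 100.0])      # thickness (nm)
lut_i = np.array([0.020, 0.045, 0.070, 0.090, 0.110])  # normalized intensity

i_norm = normalize_haadf(raw=1250.0, dark=50.0, beam=20050.0)   # -> 0.060
i_crys = subtract_windows(i_norm, i_windows=0.004)              # -> 0.056
print(thickness_from_lut(i_crys, lut_t, lut_i))                 # ~48.8 nm
```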
Results and Discussion
This paper is organized as follows: first, by tracking the lamella thickness over temperature for different TBP pressures in the in situ cell, it is ensured that the following measurements are not influenced by any destructive behaviour of the crystal structure due to P desorption. Next, the initial Bi cluster formation temperature and the cluster characteristics in terms of cluster size and formation time are determined by analysing the intensity distribution within the Ga(P1-x,Bix) layers for three temperature series covering the MOVPE growth temperature regime. Here, one question to be answered is during which stage of the growth process the local Bi enrichment takes place. Subsequent annealing studies at temperatures significantly above the growth temperature ascertain the relationship between the Bi fraction in the Ga(P1-x,Bix) layer and the initial Bi clustering temperature. Finally, a closer insight into the crystalline structure of the Bi clusters within the Ga(P92.6Bi7.4) layer is given. Figure 1 shows a HAADF STEM overview image of the investigated structure as grown, i.e., without thermal treatment. As mentioned in the experimental section, the sample consists of two Ga(P1-x,Bix) layers with different Bi contents. The Bi fraction is indicated on the right hand side of the image. Every Ga(P1-x,Bix) layer has a thickness of around 55 nm and is surrounded by approximately 100 nm of GaP. Due to the large atomic number of Bi (Z = 83), the clusters appear brighter in the HAADF STEM image. Some Bi enriched areas can be observed on the left hand side in the topmost Ga(P92.6Bi7.4) layer. Due to the thickness of the lamella and the resulting overlay of the rhombohedral crystal structure of a pure Bi cluster and the surrounding zincblende host material, it is not possible to determine whether the bright areas are rhombohedral Bi clusters or Bi enriched zincblende Ga(P1-x,Bix). Therefore, we will refer to the bright areas within the Ga(P1-x,Bix) layer as clusters in the following. This is justified because the post thermal annealing investigation of a thin sample region reveals that the bright areas which occur due to the thermal annealing consist of rhombohedral Bi 16, as will be shown later. Taking a closer look at the areas around the brighter regions in Fig. 1, slightly darker Bi depleted areas can be recognized, which result from Bi migration during the cluster formation. Before investigating the detailed characteristics of the Bi clusters, it should be clarified whether the cluster formation happens during the growth of the Ga(P1-x,Bix) layer itself, taking place at 400 °C, or during the growth of the overlying GaP layer, which acts as a 625 °C thermal anneal for the lower Ga(P1-x,Bix) layers. To ensure no contribution from already existing Bi clusters, the experiments were carried out in areas with a homogeneous Bi distribution.
To facilitate the investigation of III/V semiconductor materials in situ in the TEM in the growth temperature regime, the crystal structure must be stabilized during the in situ annealing investigation to prevent group V desorption. This ensures that the observed changes do not occur due to structural degradation. One method to measure sample destruction is to track the TEM lamella thickness during the in situ experiment. By doing so it is possible to identify the non-destructive temperature range as a function of the group V stabilization. Figure 2 shows the results of these thickness measurements. Here, the thickness divided by the initial thickness (t_200°C) is plotted over temperature for an unstabilized environment and two different TBP pressures in the in situ cell. The red dots belong to the unstabilized annealing experiment; the black squares and blue triangles show the results of the thermal treatment experiments applying 140 hPa and 180 hPa TBP, respectively. It should be mentioned that the thickness measurements in the unstabilized experiment and in the annealing series with 140 hPa TBP in the cell were carried out on a different TEM lamella within a GaP region with approximately 3% B, which is negligible in the analysis. By comparing the stabilized with the unstabilized experiment, one can clearly see the tremendous difference in sample stability during the thermal treatment. Furthermore, by comparing the thermal treatment experiments under 140 hPa TBP (black squares) and 180 hPa TBP (blue triangles) environments, the different non-destructive temperature regimes can be seen directly. With 180 hPa TBP in the cell, the crystal structure is thermally stable up to 750 °C, whereas the sample annealed in the 140 hPa TBP environment starts to degrade at around 600 °C. As a result of these findings, all further in situ annealing investigations discussed in this paper were carried out under 180 hPa TBP stabilization.
To identify the initial temperature at which clustering takes place, the temperature was gradually increased in 50 °C steps starting from 300 °C, which is significantly below the growth temperatures of 400 °C and 625 °C, respectively. For every series the temperature was kept for 30 to 45 minutes. The HAADF STEM images recorded during every series in approximately two-minute cycles are the basis for the Bi clustering study within the Ga(P1-x,Bix) layers. The measured standard deviation of the intensity within the ternary Ga(P1-x,Bix) layer divided by the mean intensity of the surrounding GaP matrix, hereinafter designated σ_rel, is a measure of the inhomogeneity of the Bi distribution in the corresponding Ga(P1-x,Bix) layer, as has been shown for the example of Ga(N,As,P) by Wegele and co-workers 31. It should be pointed out that, due to the large difference in the surface-to-bulk ratio and the relaxed strain in the thin lamella, the absolute values of the initial temperature above which clustering takes place in the MOVPE process and inside an electron transparent TEM lamella are not directly comparable. Nevertheless, the trend is the same, and therefore the results presented in the following allow an in situ investigation of a specific MOVPE process inside the STEM under high resolution conditions. Plot (a) in Fig. 3 shows σ_rel within the Ga(P92.6Bi7.4) layer, obtained from the aforementioned HAADF STEM images, over time for three temperature steps. Looking at the data points of the 500 °C time series (black squares), no increase in σ_rel can be observed. This indicates that no local Bi enrichment takes place. It should be mentioned that this temperature is already 100 °C above the Ga(P1-x,Bix) layer growth temperature of 400 °C. Also, upon further increasing the temperature by 50 °C to 550 °C (red dots), no change in the structure is visible. The blue triangles represent the time series at 600 °C; here the influence of the temperature on σ_rel within the Ga(P92.6Bi7.4) layer, which originates from Bi clustering, can be seen. It is worth noting that due to the random Bi distribution within the Ga(P1-x,Bix) layer, the fluctuation of the sample thickness, and the presence of amorphous layers from sample preparation, the value of σ_rel is not zero even at the beginning of the experiment. It should also be emphasised that post in situ energy dispersive X-ray spectroscopy (EDS) measurements shown in Fig. 5(b) prove that the observable bright areas consist of Bi and not of Ga, which might otherwise form Ga droplets on the surface as observed in destructive temperature regimes. To better illustrate the process of Bi clustering, a dashed line as a guide to the eye is added to the annealing series done at 600 °C in Fig. 3(a). Taking a closer look at the curvature of this line, the time course of the structural change is directly observable. The clustering of Bi within the Ga(P92.6Bi7.4) layer takes around 40 minutes before the clusters reach a stable size. Later investigations at higher temperatures, shown in Fig. 4, will clarify whether the cluster size increases further with rising temperature or whether there is a stable Bi cluster formation over the whole temperature regime above the threshold temperature. The STEM images presented in Fig. 3(b) show the formation of one big and two smaller Bi clusters during the temperature series at 600 °C.
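As a compact sketch, σ_rel can be computed from a HAADF frame given binary masks for the ternary layer and the surrounding GaP; the synthetic frames and mask geometry below are illustrative only.

```python
import numpy as np

def sigma_rel(frame, layer_mask, gap_mask):
    """Standard deviation inside the Ga(P,Bi) layer divided by the mean
    HAADF intensity of the surrounding GaP matrix."""
    return frame[layer_mask].std() / frame[gap_mask].mean()

def sigma_rel_series(frames, layer_mask, gap_mask):
    """Track sigma_rel over frames recorded in ~2 minute cycles; a rising
    trend indicates local Bi enrichment (clustering)."""
    return np.array([sigma_rel(f, layer_mask, gap_mask) for f in frames])

# Synthetic series in which the intensity spread inside the layer grows
rng = np.random.default_rng(0)
h, w = 64, 256
layer = np.zeros((h, w), dtype=bool)
layer[:, 80:135] = True                 # ~55 nm wide layer region
gap = ~layer
frames = [rng.normal(1.0, 0.02 + 0.01 * k, size=(h, w)) for k in range(5)]
print(sigma_rel_series(frames, layer, gap).round(3))   # monotonically rising
```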
It is worth noting that the very long investigation time of this experiment and the presence of the group V precursor gas in the in situ cell resulted in a carbon coating forming over time, which caused a reduction of the contrast in the STEM images. The images presented on the left hand side of Fig. 3(b) are therefore the original STEM images, shown on a common intensity scale. For comparison, the micrographs shown on the right hand side of Fig. 3(b) are colour-coded from blue (minimum intensity) to red (maximum intensity), each on its individual intensity scale, since this enhances the visibility of the clusters. The colour-coding scale is shown on the far right hand side. The time elapsed since the start of the experiment at which each image was recorded is also indicated on the very right hand side. Furthermore, the black lines between plots (a) and (b) relate each STEM image to the corresponding data point. It should be mentioned that the STEM images in Fig. 3(b) show only a part of the larger images used to generate the data points presented in Fig. 3(a). To further compare the Bi clustering in dependence of the Bi fraction, plot (c) in Fig. 3 shows σ_rel within the Ga(P96.4Bi3.6) layer drawn over time. Here, no Bi clustering could be observed for any of the three temperature series, i.e., 500 °C, 550 °C, and 600 °C. Overall, the results presented in Fig. 3 already answer the question regarding the stage of the growth process during which the local Bi enrichment occurs. It can thus be concluded that the Ga(P92.6Bi7.4) layer structure might grow without Bi clusters, and that the inhomogeneity of the Bi distribution arises during the growth of the subsequent GaP layer, which has a growth temperature of 625 °C. This result is further supported by ex situ investigations on Ga(P92.6Bi7.4) layers without a GaP cap, where no Bi clusters appear in the structure. Further experiments significantly above the growth temperature should demonstrate the relationship between the temperature at which Bi clustering takes place and the Bi fraction in the Ga(P1-x,Bix) layer, as well as whether the cluster size in the Ga(P92.6Bi7.4) layer increases further with rising temperature.
To further investigate the initialisation temperature with regard to the Bi fraction, the plots in Fig. 4 show σ_rel for the same Ga(P1-x,Bix) layers presented in Fig. 3 but at temperatures significantly above the growth temperatures. It should be noted that the measurements of the Ga(P92.6Bi7.4) layer were carried out at a thinner section of the lamella; therefore the values of σ_rel are slightly different compared to the values in Fig. 3(a). A closer look at plot (a) in Fig. 4 reveals that there is no further inhomogeneity in the Bi distribution caused by increasing the temperature from 650 °C up to 750 °C. This confirms the previous observation that the Bi cluster formation reaches a stable size which is steady against higher temperatures. Due to the large spread of the data points presented in Fig. 4(b), dashed lines are added to the Ga(P96.4Bi3.6) layer measurements as a guide to the eye. By comparing the series recorded at 650 °C (black squares) and the series measured at 700 °C (red dots), one cannot see any hint of Bi clustering, which would lead to an increase of σ_rel. In contrast, a rising σ_rel value can be assumed in the annealing series recorded at 750 °C within the Ga(P96.4Bi3.6) layer. From that point of view one can speculate that the Bi fraction in the Ga(P1-x,Bix) layers is directly related to the initial clustering temperature. Unfortunately, it is not possible to further increase the temperature to investigate the Bi clustering in the Ga(P96.4Bi3.6) layer without destroying the crystal structure in a 180 hPa TBP environment. Further studies carried out at higher TBP pressures will investigate this correlation in more detail. The Bi {101} planes with a spacing of 0.327 nm are highlighted in Fig. 5(a) with white dashed lines. The inset in Fig. 5(b) shows the inverted FFT of the STEM image shown in Fig. 5(a). The FFT was calibrated using the {200} reflections of Ga(P,Bi), which retain the spacing of pure GaP due to the tetragonal distortion. Here the parallelism between the Bi {101} and the Ga(P,Bi) {202} planes is clearly visible. The aforementioned contrast variation resulting from Bi depleted regions, caused by Bi migrating during the formation of the Bi cluster, can also be seen in the STEM image in Fig. 5(a) at the lower right edge of the Bi cluster. The preferred migration direction seems to be along the [-101] and [101] directions. This was also observed in STEM images at higher magnifications. The plot in Fig. 5(b) shows the EDS results for the region at the Bi cluster (black line) and, for comparison, for a region within the Ga(P92.6Bi7.4) layer without Bi clusters (red line). The counts per second (cps) are plotted over the energy in keV in the range around the Bi Mα edge. This result proves that the bright areas in the STEM images consist of Bi.
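The FFT-based spacing measurement reduces to a ratio of reciprocal-lattice distances once the calibration reflection is fixed; in the sketch below, the peak coordinates are hypothetical values chosen to reproduce the reported 0.327 nm spacing.

```python
import numpy as np

def spacing_from_fft_peaks(peak, ref_peak, centre, ref_spacing_nm):
    """Convert FFT peak positions into a lattice-plane spacing.

    The calibration reflection is the Ga(P,Bi) {200} spot, which retains
    the pure GaP spacing because of the tetragonal distortion.  Peak
    coordinates (row, col) would normally come from peak fitting on
    np.fft.fftshift(np.fft.fft2(image)); here they are hand-set inputs.
    """
    g = np.linalg.norm(np.asarray(peak, float) - np.asarray(centre, float))
    g_ref = np.linalg.norm(np.asarray(ref_peak, float) - np.asarray(centre, float))
    return (g_ref / g) * ref_spacing_nm     # spacing scales as 1/|g|

# GaP: a = 0.5451 nm -> d{200} = 0.2726 nm (calibration reference)
centre = (256, 256)
d_bi = spacing_from_fft_peaks(peak=(256, 469), ref_peak=(256, 512),
                              centre=centre, ref_spacing_nm=0.2726)
print(round(d_bi, 3))   # ~0.327 nm, matching the Bi {101} spacing
```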
Summary
In this paper, we first demonstrate that the in situ setup used facilitates the investigation of III/V semiconductor materials in a TEM in the growth temperature regime, since it provides the necessary group V stabilization. Moreover, we determine the temperature above which Bi clustering takes place. Based on the two investigated composition values, we find that this initial temperature depends on the actual Bi fraction within the Ga(P1-x,Bix) layers. By investigating the Ga(P1-x,Bix) layers at even higher temperatures, we were able to conclude that there is no further Bi rearrangement. The process is most likely limited by the amount of Bi available in the thin TEM lamella. Comparing these results to the findings on the as-grown sample suggests that the Bi clustering during the MOVPE growth process most likely took place during the growth of the subsequent GaP layer, which acts as a 625 °C thermal treatment of the lower Ga(P1-x,Bix) layer. Finally, a closer look at a representative Bi cluster reveals its rhombohedral structure and its orientation relation to the Ga(P,Bi) matrix. | 5,352.2 | 2018-06-13T00:00:00.000 | [
"Materials Science"
] |
Automated wound care by employing a reliable U-Net architecture combined with ResNet feature encoders for monitoring chronic wounds
Quality of life is greatly affected by chronic wounds, which require more intensive care than acute wounds: patients must schedule follow-up appointments with their doctor to track healing. Good wound treatment promotes healing and reduces complications. Wound care requires precise and reliable wound measurement to optimize patient treatment and outcomes according to evidence-based best practices. Images are used to objectively assess wound state by quantifying key healing parameters. Nevertheless, robust segmentation of wound images is complex because of the high diversity of wound types and imaging conditions. This study proposes and evaluates a novel hybrid model developed for wound segmentation in medical images. The model combines advanced deep learning techniques with traditional image processing methods to improve the accuracy and reliability of wound segmentation. The main objective is to overcome the limitations of existing segmentation methods (UNet) by leveraging the combined advantages of both paradigms. In our investigation, we introduce a hybrid model architecture in which a ResNet34 is utilized as the encoder and a UNet is employed as the decoder. The combination of ResNet34's deep representation learning and UNet's efficient feature extraction yields notable benefits. The architectural design successfully integrates high-level and low-level features, enabling the generation of segmentation maps with high precision and accuracy. Applying our model to the actual data, we obtained the following values for the Intersection over Union (IoU), Dice score, and accuracy: 0.973, 0.986, and 0.9736, respectively. According to the achieved results, the proposed method is more precise and accurate than the current state of the art.
Introduction
Rehabilitation therapists assume a crucial role in the home-based therapy of pressure injuries and other types of wounds. Therapists possess a robust understanding of evidence-based practices in the field of wound care. Through collaborative efforts with the interdisciplinary care team, therapists are capable of offering valuable assistance in the management of wound care, hence contributing to its effectiveness. Rehabilitation therapists receive specialized training in wound assessment and documentation and provide critical interventions to improve patient outcomes and the financial sustainability of home healthcare businesses. Physical therapists develop optimal wound treatment plans, including restoring mobility, strengthening, and healing. A wound is considered chronic if healing takes more than 4 weeks without progress (1). A variety of factors can impede the usual process of wound healing. People who have comorbidities like diabetes and obesity are more likely to suffer from such wounds. The care for these injuries comes at a very high financial cost. Patients suffering from chronic wounds require more intensive wound care than patients suffering from acute wounds (2). They have to visit a doctor on a consistent basis so the doctor can monitor how well the wound is healing. The management of wounds needs to adhere to the best practices available in order to facilitate healing and reduce the risk of wound complications. The evaluation of wound healing begins with a thorough assessment of the wound. When determining the rate of wound healing, one of the most important aspects of wound care practice is the utilization of clinical standards (3). These guidelines prescribe the standard documentation of wound-related information, such as wound color, size, and composition of wound tissue (4). The conventional technique involves wound care professionals taking measurements of the wound area as well as its tissue structure. This method is labor-intensive, expensive, and difficult to replicate (5, 6). In this study, we introduce an approach for automatically assessing wounds that makes use of automatic wound color segmentation in addition to algorithms based on artificial intelligence.
Wound healing is a complex and dynamic biological process that results in tissue regeneration, restoration of anatomical integrity, and restoration of similar functionality (7). The advanced wound care market is anticipated to surpass a value of $22 billion by the year 2024 (5). This growth may be attributed to the rise in outpatient wound treatments presently being administered (8). Chronic wounds are categorized as wounds that have exceeded the typical healing timeline and remain open for a duration surpassing 1 month (9). Chronic wound infections have been found to cause substantial morbidity and make a significant contribution to rising healthcare costs (10). The development of advanced wound care technologies is imperative in order to address the increasing financial strain on national healthcare budgets caused by chronic wounds, as well as the significant adverse effects these wounds have on the quality of life of affected patients (11). Currently, there is a significant prevalence of patients experiencing wound infections and chronic wounds. The management of postoperative wounds continues to present a laborious and formidable task for healthcare professionals and individuals undergoing surgery. There exists a significant need for the advancement of a collection of algorithms and associated methodologies aimed at the timely identification of wound infections and the autonomous monitoring of wound healing progress (12, 13). A pressure ulcer, in accordance with the European Pressure Ulcer Advisory Panel, is characterized as a specific region of restricted harm to both the underlying tissue and the skin, resulting from the action of pressure, shear, and friction. A pressure ulcer is classified as a chronic injury resulting from persistent and prolonged soft tissue compression against a bony prominence, a rigid surface, or medical equipment (14). The occurrence of diabetic foot ulcers (DFU) represents a significant complication associated with the presence of diabetes (15). DFU is the primary factor contributing to limb amputations. According to the World Health Organization (WHO), it has been estimated that approximately 15% of individuals diagnosed with diabetes mellitus experience the occurrence of a DFU at least once throughout their lifespan (16). Image segmentation is an essential task in computer vision and image processing (17). The process of image segmentation holds significant importance in numerous medical imaging applications, as it aids in automating or facilitating the identification and delineation of essential regions of interest and anatomical structures (18). Nevertheless, it is challenging to generalize segmentation performance across various wound images. The presence of various wound types, colors, shapes, body positions, background compositions, capturing devices, and image-capturing conditions contributes to the considerable diversity observed in wound images (19). Wound segmentation in medical imaging has advanced with hybrid models. Relevant studies (20-22) emphasize community-driven chronic wound databases, telemedicine-based frameworks, and M-Health for telewound monitoring. CNNs, ensemble learning, attention mechanisms, and transfer learning improve crop and rice disease detection and breast cancer classification (23-26). Our research builds on these findings to develop a hybrid model for improved chronic wound segmentation accuracy. The objective of our study is to concentrate on the
advancement of a deep-learning methodology for wound segmentation. We suggest a unique framework that integrates the advantageous features of the UNet architecture and the ResNet34 model in order to enhance the efficacy of image segmentation tasks.
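One plausible way to instantiate such an encoder-decoder pair is via the segmentation_models_pytorch package, as sketched below; the ImageNet initialization, Dice loss, learning rate, and input size are our assumptions, not settings reported by the paper.

```python
# pip install torch segmentation-models-pytorch
import torch
import segmentation_models_pytorch as smp

# U-Net decoder on top of a ResNet34 encoder; ImageNet weights are an
# assumed (common) transfer-learning starting point.
model = smp.Unet(
    encoder_name="resnet34",
    encoder_weights="imagenet",
    in_channels=3,    # RGB wound photographs
    classes=1,        # binary mask: wound vs. background
)

loss_fn = smp.losses.DiceLoss(mode="binary")   # matches the Dice metric
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

# One illustrative training step on random tensors
x = torch.randn(2, 3, 256, 256)
y = torch.randint(0, 2, (2, 1, 256, 256)).float()
optimizer.zero_grad()
loss = loss_fn(model(x), y)
loss.backward()
optimizer.step()
```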
The main contribution of this study is the development and evaluation of a hybrid model for wound segmentation that seamlessly integrates advanced deep learning approaches with traditional image processing methods. This combination aims to significantly improve the accuracy and reliability of wound segmentation, overcoming limitations identified in existing methodologies.
The article's remaining sections are arranged as follows. Related works: a detailed review of previous research in the field. Materials and methods: describes the dataset used and the architecture and implementation of the hybrid model, detailing how advanced deep learning techniques integrate with traditional image processing techniques. Results: presents the results of the experimental analysis evaluating the performance of the hybrid model. Discussion: analyzes the results, discusses their implications, compares them with existing segmentation methods, and explores potential applications of the hybrid model in clinical practice. Conclusions: summarizes the main contributions, highlights the importance of the developed hybrid model, and suggests directions for future research.

Related works

The authors of (27) proposed an integrated system that automates the process of segmenting wound regions and analyzing wound conditions in images of wounds. In contrast to previous segmentation techniques that depend on manually designed features or unsupervised methods, the study's authors introduce a deep learning approach that simultaneously learns task-relevant visual features and carries out wound segmentation. In addition, the acquired features are utilized for subsequent examination of wounds through two distinct approaches: identification of infections and forecasting of healing progress. The proposed methodology demonstrates computational efficiency, with an average processing time of under 5 s per wound image of dimensions 480 by 640 pixels when executed on a standard computer system. The evaluations conducted on a comprehensive wound database provide evidence supporting the efficacy and dependability of the proposed system. Ohura et al. (28) established several convolutional neural networks (CNNs) using various methods and architectural frameworks. The four architectures considered in their study were LinkNet, SegNet, U-Net, and U-Net with the VGG16 encoder pre-trained on ImageNet (referred to as Unet_VGG16). Every CNN was trained using supervised data pertaining to sacral pressure ulcers (PUs). The U-Net architecture yielded the most favorable outcomes among the four architectures. The U-Net model exhibited the second-highest level of accuracy, as measured by the area under the curve (AUC), with a value of 0.997. Additionally, it demonstrated a high level of specificity (0.943) and sensitivity (0.993); the highest values were achieved when utilizing the Unet_VGG16 variant. The U-Net architecture was deemed the most practical compared to the other architectures due to its faster segmentation speed in comparison with Unet_VGG16. Scebba et al.
(19) introduced the detect-and-segment (DS) method, a deep learning technique designed to generate wound segmentation maps with excellent generalization capability. The proposed methodology involved the utilization of specialized deep neural networks to identify the location of the wound accurately, separate the wound from the surrounding background, and generate a comprehensive map outlining the boundaries of the wound. The researchers applied this methodology to a dataset of diabetic foot ulcers and compared the results with those obtained using a segmentation method that relied on the entire image. In order to assess the extent to which the DS approach can be applied to out-of-distribution data, the researchers evaluated its performance on four distinct and independent data sets, which encompassed a wider range of wound types originating from various locations on the body. The Matthews correlation coefficient (MCC) exhibited a notable enhancement, increasing from 0.29 (full image) to 0.85 (DS) on the diabetic foot ulcer data set. Upon conducting tests on the independent data sets, it was observed that the mean MCC increased significantly from 0.17 to 0.85. In addition, the utilization of DS facilitated training the segmentation model with a significantly reduced amount of training data, a reduction of up to 90%, without any detrimental effect on segmentation efficiency. The proposed DS approach represents a significant advancement in the automation of wound analysis and the potential reduction of the effort required for the management of chronic wounds. Oota et al. (29) constructed segmentation models for a diverse range of eight distinct wound image categories. In their study, the authors present WoundSeg, an extensive and heterogeneous dataset comprising segmented images of wounds. The complexity of segmenting generic wound images arises from the presence of heterogeneous visual characteristics within images depicting similar types of wounds. The authors present a new image segmentation framework called WSNet. This framework incorporates two key components: (a) wound-domain adaptive pretraining on a large collection of unlabelled wound images and (b) a global-local architecture that utilizes both the entire image and its patches to capture detailed information about diverse types of wounds. On WoundSeg, the framework demonstrates a satisfactory Dice score of 0.847. On the existing AZH Woundcare and Medetec datasets, a novel state-of-the-art has been established. Buschi et al.
(30) proposed a methodology to segment pet wound images automatically. This approach involves the utilization of transfer learning (TL) and active self-supervised learning (ASSL) techniques. Notably, the model is designed to operate without any manually labeled samples initially. The efficacy of the two training strategies was demonstrated by their ability to produce substantial quantities of annotated samples without significant human intervention. The procedure, as mentioned earlier, enhances the efficiency of the validation process conducted by clinicians and has been empirically demonstrated to be an effective strategy in medical analyses. The researchers discovered that the EfficientNet-b3 U-Net model, when compared to the MobileNet-v2 U-Net model, exhibited superior performance and was deemed an optimal deep learning model for the ASSL training strategy. Additionally, they provided numerical evidence to support the notion that the intricacy of wound segmentation does not necessitate the utilization of intricate deep-learning models. They demonstrated that the MobileNet-v2 U-Net and EfficientNet-b3 U-Net architectures exhibit comparable performance when trained on a larger collection of annotated images. The incorporation of transfer learning components within the ASSL pipeline serves to enhance the overall ability of the trained models to generalize. Rostami et al. (31) developed an ensemble-based classifier utilizing deep convolutional neural networks (DCNNs) to effectively classify wound images into various classes, such as surgical, venous ulcer, and diabetic. The classification scores generated by two classifiers, specifically the patch-wise and image-wise classifiers, are utilized as input for a multilayer perceptron in order to enhance the overall classification performance. A 5-fold cross-validation strategy is used to evaluate the suggested method. The researchers achieved maximum and mean classification accuracy rates of 96.4% and 94.28%, respectively, for binary classification tasks. For 3-class classification problems, they attained maximum and average accuracy rates of 91.9% and 87.7%, respectively. The classifier under consideration was evaluated against several widely used deep classifiers and demonstrated notably superior accuracy metrics. The proposed method was also evaluated on the Medetec wound image dataset, yielding accuracy values of 82.9% and 91.2% for 3-class and binary classifications, respectively. The findings indicate that the method proposed by the researchers demonstrates effective utility as a decision support system for wound image classification and other clinically relevant applications.

Materials and methods
Dataset
The initial dataset comprises around 256 images of laboratory mice with inflicted wounds (35). The study encompasses eight mice, four assigned to Cohort 1 (C1) and four to Cohort 2 (C2). The observation period spans 16 days, during which the healing process is monitored. The two cohorts represent two separate experimental conditions, so results are grouped by cohort. Each mouse exhibits a pair of wounds, one on the left side and one on the right; consequently, the dataset comprises time series for 16 individual wounds. Every wound is bounded by a circular cast (splint) that encompasses the injured region, with an inner diameter of 10 mm and an outer diameter of 16 mm. Because its inner and outer diameters are fixed and known, the splint is used as a reference object in the workflow. It is essential to note that a significant proportion of the images, over 25%, exhibit missing casts or substantial damage to the splints. The images also present various challenges: variations in image rotation (portrait vs. landscape), varying lighting conditions, inaccurate or obscured tape measure position, multiple visible wounds, significant occlusion of the wound, and differences in the relative position of the wound within the frame. These challenges prompted the researchers to combine deep learning algorithms with conventional image processing techniques. Pre-processing for wound segmentation involves multiple steps: the images are acquired from the wound photograph dataset; the labelme tool is then employed to designate the wound area and the encompassing region; and data augmentation techniques such as vertical and horizontal flips, transposition, and rotation are applied. The inclusion of diverse data and the preparation of the dataset were crucial steps in improving its quality and suitability for training an effective wound segmentation model. Samples from this dataset are presented in Figure 1.
FIGURE 1 | An illustration of nine different samples (A-I) from the dataset used in the study.
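Because the splint's inner diameter is fixed at 10 mm, a per-image pixel-to-millimetre scale can be recovered from its apparent size. The short sketch below illustrates this calibration arithmetic; the function names and the idea of measuring the inner diameter in pixels are ours, not taken from the paper's workflow.

```python
# Hedged sketch of splint-based scale calibration (assumed procedure, for
# illustration only): the splint's known 10 mm inner diameter converts a
# measured pixel diameter into a physical scale.
def mm_per_pixel(inner_diameter_px: float, inner_diameter_mm: float = 10.0) -> float:
    return inner_diameter_mm / inner_diameter_px

def wound_area_mm2(wound_pixels: int, inner_diameter_px: float) -> float:
    scale = mm_per_pixel(inner_diameter_px)
    return wound_pixels * scale**2   # area scales with the square of the length scale
```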
Data preprocessing
Data pre-processing is of great significance in preparing input data for machine learning tasks. In the context of wound segmentation, image pre-processing is needed to improve image quality, extract pertinent features, and augment the dataset to enhance model performance. This section examines the stages of data pre-processing for wound segmentation. In brief, the process entails several steps: the images are obtained from the wound photo dataset; the wound area and the surrounding region are labeled using the labelme library in Python; and data augmentation methods such as vertical and horizontal flips, rotation, and transposition are applied. These steps enrich the dataset, enhance its diversity, and prepare it for training a resilient wound segmentation model. Firstly, the images are obtained from the dataset of wound photographs (35). These images are the primary input for the segmentation task. The dataset comprises images portraying various categories of wounds, and each image must be processed and labeled to differentiate the injured area from the surrounding region. The labelme (36) library is employed for image annotation and labeling in the Python programming language. It offers a user-friendly graphical interface that allows users to annotate regions of interest within images. To facilitate segmentation, distinct classes are created by applying separate labels to the wound area and the surrounding area. Figure 2 illustrates various instances of data labeling. Once labeling is finished, the next step is data augmentation. Data augmentation techniques are utilized to enhance the diversity and quantity of the dataset, thereby potentially improving the model's generalization capability. For wound segmentation, frequently used augmentations include horizontal and vertical flips, rotation, and transposition. Vertical and horizontal flips mirror the image along the horizontal or vertical axis, respectively, inducing variations in the orientation of the wounds. Rotation applies a specific angle to the image, emulating diverse viewpoints of the wound. Transposition flips an image along its diagonal axis. By applying these augmentation techniques to the labeled images, we produce supplementary training instances that vary in orientation and position. The augmented dataset covers a broader spectrum of potential scenarios, helping the model learn more robust features and segment wounds more accurately. To preserve their correspondence with the augmented images, the labels must be transformed in the same way during augmentation: if an image undergoes horizontal flipping, the associated label must undergo a corresponding horizontal flip as well. Figure 3 provides an example of the augmentation process, illustrating the original image and the augmentation steps applied to it.
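The augmentations named above must be applied identically to an image and its mask so the labels stay aligned with the pixels. The following is a minimal NumPy sketch of such label-consistent augmentation; it assumes images as (H, W, C) arrays and masks as (H, W) arrays, and is an illustration rather than the paper's actual pipeline.

```python
# Minimal sketch of label-consistent augmentation (assumed array layout:
# image (H, W, C), mask (H, W)); not the paper's actual code.
import numpy as np

def augment_pair(image: np.ndarray, mask: np.ndarray):
    """Yield flipped/rotated/transposed variants of an (image, mask) pair,
    applying the identical geometric transform to both arrays."""
    yield image, mask                              # original
    yield np.fliplr(image), np.fliplr(mask)        # horizontal flip
    yield np.flipud(image), np.flipud(mask)        # vertical flip
    for k in (1, 2, 3):                            # 90/180/270 degree rotations
        yield np.rot90(image, k), np.rot90(mask, k)
    # transpose: swap the two spatial axes (image keeps its channel axis last)
    yield image.transpose(1, 0, 2), mask.T
```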
Proposed framework
This study introduces a novel framework that integrates the U-Net (37) architecture and the ResNet34 (38) model in order to enhance the efficacy of image segmentation tasks.The proposed framework utilizes the encoding capabilities of ResNet34 as the primary encoder while incorporating the decoder architecture of U-Net to achieve precise and comprehensive segmentation.The diagram depicting the proposed framework is presented in Figure 4, while Figure 5 highlights the main steps of the proposed system.
The framework integrates the robust encoding capabilities of ResNet with the precise and comprehensive segmentation capabilities of U-Net.Through the strategic utilization of the inherent advantages offered by both architectures, our hybrid model endeavors to enhance the efficacy of image segmentation tasks.The integration of fusion and skip connections facilitates a seamless connection between the encoder and decoder, enabling efficient information transmission and accurate segmentation.The framework that has been proposed presents a promising methodology for tackling the challenges associated with image segmentation.It holds the potential to advance the current state-of-the-art in this particular domain significantly.The main elements within the proposed framework may be enumerated as follows:
Encoder-decoder architecture
The proposed framework uses a hybrid model architecture with a ResNet34 as the encoder and a U-Net as the decoder.The integration of ResNet34's deep representation learning and U-Net's efficient feature extraction enables us to derive significant advantages.The utilization of the encoder-decoder architecture has been widely recognized as an effective strategy in the context of image segmentation tasks.This architectural design effectively captures and incorporates both low-level and high-level features, thereby facilitating the generation of precise and accurate segmentation maps.
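The paper does not state which implementation it used for the ResNet34-encoder/U-Net-decoder pairing; one common way to instantiate exactly this combination is the open-source segmentation_models_pytorch package, as sketched below. This is an assumption for illustration, not the authors' code.

```python
# Hedged sketch: a U-Net with a pretrained ResNet34 encoder via the
# segmentation_models_pytorch package (one common realization of the pairing
# described in the text; the paper's own implementation is not specified).
import torch
import segmentation_models_pytorch as smp

model = smp.Unet(
    encoder_name="resnet34",      # ResNet34 backbone as the encoder
    encoder_weights="imagenet",   # ImageNet-pretrained features
    in_channels=3,                # RGB wound photographs
    classes=1,                    # binary wound-vs-background mask
)

model.eval()                      # inference mode for a quick smoke test
x = torch.randn(1, 3, 256, 256)   # dummy batch: N x C x H x W
with torch.no_grad():
    logits = model(x)             # -> (1, 1, 256, 256) segmentation logits
```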
ResNet encoder
The ResNet34, which is an abbreviation for Residual Network 34, is a highly prevalent deep learning framework renowned for its efficacy in addressing the challenge of vanishing gradients in deep neural networks.The encoder employed in our study is a pre-trained ResNet model, which has undergone training on a comprehensive image classification task to acquire extensive and distinctive features.The ResNet encoder is responsible for taking the input image and iteratively encoding it into a compressed feature representation.
U-Net decoder
The U-Net architecture was initially introduced for the segmentation of biomedical images, with the specific aim of generating precise and comprehensive segmentation maps. It comprises a decoder that restores the spatial information lost during the encoding phase. The decoder in the U-Net architecture is composed of a sequence of upsampling and concatenation operations, which progressively restore the initial image resolution while integrating the high-level feature maps obtained from the encoder.
Fusion and skip connections
In order to facilitate efficient transmission of information between the encoder and decoder, our hybrid framework integrates fusion and skip connections. The fusion connections facilitate the integration of feature maps derived from the ResNet encoder with their corresponding feature maps in the U-Net decoder. The integration of both low-level and high-level features enables the decoder to enhance the accuracy of the segmentation. Skip connections are utilized to establish direct connections between the encoder and decoder at various spatial resolutions. These connections play an important role in facilitating the transfer of intricate details and spatial information between layers, thereby enabling precise segmentation.
FIGURE 4 | The suggested U-Net architecture with a ResNet34-based encoder.
Training and optimization
The introduced framework is trained in a supervised fashion utilizing a dataset that has been annotated with segmentation masks at the pixel level.A suitable loss function, such as the Dice coefficient or cross-entropy, is utilized to quantify the dissimilarity between the ground truth and the predicted segmentation maps.The optimization of network parameters is performed using the Adam algorithm, a gradient-based optimization technique, along with a suitable schedule for the learning rate.
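A minimal training step consistent with this description might look as follows; the soft-Dice loss shown is one standard formulation of the Dice option the text mentions, and the learning rate, schedule, and the `model`/`loader` objects are assumptions for illustration.

```python
# Illustrative training step, assuming `model` from the sketch above and a
# DataLoader `loader` of (image, mask) batches; hyperparameters are assumed.
import torch

def soft_dice_loss(logits, targets, eps=1e-6):
    """Standard soft-Dice loss for binary masks of shape (N, 1, H, W)."""
    probs = torch.sigmoid(logits)
    inter = (probs * targets).sum(dim=(2, 3))
    denom = probs.sum(dim=(2, 3)) + targets.sum(dim=(2, 3))
    return 1.0 - ((2.0 * inter + eps) / (denom + eps)).mean()

optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
scheduler = torch.optim.lr_scheduler.StepLR(optimizer, step_size=10, gamma=0.5)

for images, masks in loader:          # one pass over the augmented training set
    optimizer.zero_grad()
    loss = soft_dice_loss(model(images), masks.float())
    loss.backward()
    optimizer.step()
scheduler.step()                      # one schedule step per epoch
```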
Results
This section presents the results of a study aimed at segmenting the wound using the developed hybrid ResNet34 and U-Net model. The research process included the development of the model, data augmentation, training of the model on these data, and testing on actual data not involved in the training process. As a result, the following were obtained:
Development of an algorithm for segmentation of the wound
An algorithm was developed based on a combination of the ResNet34 neural network and the U-Net architecture. The algorithm consists of several stages, including preliminary data labeling, processing, and the building and training of the model. Preliminary processing includes scaling and normalization of the data to improve the quality and speed of the model's training.
Model training
To train the model, a dataset was used comprising wound images with segmentation labels. The model was optimized using the Adam optimizer, and training was carried out on a 12th Gen Intel® i7.
Testing on real data
After training, the model was tested on actual data that were not used in the learning process. For each wound image, the model predicted a segmentation mask indicating the wounded area. Metrics such as Intersection over Union (IOU) and the Dice coefficient were used to assess the quality of the segmentation.
Intersection over Union
The IOU metric is used to assess the similarity between the predicted mask and the actual mask of the wound. IOU is calculated by dividing the area of the intersection of the two masks by the area of their union. A higher IOU indicates better segmentation.
FIGURE 5 | The main steps of the proposed system.
Dice coefficient
The Dice coefficient is also used to measure the similarity between the predicted mask and the actual mask of the wound. It is calculated as twice the intersection area divided by the sum of the areas of the predicted and actual masks. A high value of the Dice coefficient likewise indicates more accurate segmentation.
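Both metrics translate directly into code; the sketch below assumes binary masks stored as boolean NumPy arrays.

```python
# Direct transcription of the two metrics defined above, for binary masks.
import numpy as np

def iou(pred: np.ndarray, truth: np.ndarray) -> float:
    inter = np.logical_and(pred, truth).sum()
    union = np.logical_or(pred, truth).sum()
    return inter / union if union else 1.0   # two empty masks count as a match

def dice(pred: np.ndarray, truth: np.ndarray) -> float:
    inter = np.logical_and(pred, truth).sum()
    total = pred.sum() + truth.sum()
    return 2.0 * inter / total if total else 1.0
```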
As a result of testing the model on the actual data, the following values of the IOU, Dice, and accuracy metrics were obtained: 0.973, 0.986, and 0.9736, respectively. These results confirm that the developed wound segmentation algorithm achieves good results and demonstrates a high degree of similarity between the predicted and actual wound masks. High values of the IOU and Dice metrics indicate the accuracy and quality of the wound segmentation, which is essential for evaluating the wound size and monitoring its healing. The developed wound segmentation algorithm, combining the ResNet34 neural network and the U-Net architecture with data augmentation, shows promising results on actual data. It can be a helpful tool in medical practice for the automatic segmentation of the wound and for evaluating its characteristics for more accurate diagnosis and treatment management. Figures 6 and 7 show the training and validation IOU and loss, respectively.
FIGURE 6 | Training and validation IOU curves.
FIGURE 7 | Training and validation loss curves.
Below is a table with the results of cross-validation conducted to assess the performance of the proposed model. Cross-validation was performed using the k-fold method on the augmented training dataset. Model performance evaluation metrics, including IOU and Dice score, are presented in Table 1.
Table 1 presents the results of the model evaluation for each of the k folds, where each fold in turn is used as the test set and the remaining folds as the training set. For each fold, the IOU and Dice scores indicate how well the model performs segmentation on that fold.
The averages of these metrics show the overall performance of the model on the entire dataset. In this case, the average IOU and Dice scores are 98.49% and 99.24%, respectively, which indicates a high-quality model. Cross-validation accounts for the diversity of the data and checks how stable and effective the model is on different data subsets.
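A sketch of this k-fold protocol using scikit-learn's KFold is given below; `dataset` and the helper `train_and_eval`, which trains a fresh model on the training folds and returns its (IOU, Dice) on the held-out fold, are hypothetical stand-ins for the paper's actual pipeline.

```python
# Sketch of the 5-fold evaluation protocol; `dataset` and `train_and_eval`
# are hypothetical placeholders, not names from the paper.
import numpy as np
from sklearn.model_selection import KFold

indices = np.arange(len(dataset))        # `dataset` holds (image, mask) pairs
scores = []
for train_idx, test_idx in KFold(n_splits=5, shuffle=True, random_state=0).split(indices):
    scores.append(train_and_eval(dataset, train_idx, test_idx))  # -> (iou, dice)
mean_iou, mean_dice = np.mean(scores, axis=0)   # fold averages, as in Table 1
```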
Discussion
The segmentation of wounds is a crucial undertaking in medical imaging, entailing the identification and delineation of wound boundaries within images. Accurate diagnosis, treatment planning, and monitoring of the healing progress are contingent on appropriate segmentation of wounds. One of the primary difficulties in wound segmentation is the inherent variability in the visual characteristics, dimensions, and configurations of wounds. Wounds exhibit a variety of textures, colors, and depths, spanning a spectrum from minor lacerations to extensive ulcers. This variability makes it challenging to develop a universal segmentation algorithm that can effectively segment diverse types of wounds. Various methodologies have been suggested for wound segmentation, among them approaches built on U-Net (37). The incorporation of U-Net as a deep learning model in diverse medical applications (39-44) has served as a prominent trigger for the motivation behind this investigation. U-Net has been widely used in wound segmentation, and numerous recent studies (5, 19, 30, 45-47) have endeavored to improve their methodologies by developing enhanced models built upon the U-Net framework. This study presents a novel framework that combines the U-Net architecture and the ResNet34 model to improve the effectiveness of image segmentation tasks. The integration of ResNet as an encoder within the U-Net framework has demonstrated remarkable performance in the segmentation of brain tumors, as evidenced by the work of Abousslah et al. (48). That performance inspired us to put forward a wound segmentation framework that leverages the encoding capabilities of ResNet34 as the primary encoder while integrating the decoder architecture of U-Net, with the objective of attaining accurate and comprehensive segmentation. This section analyzes the fundamental components involved in the formulation of the model and examines its associated benefits. First and foremost, it is essential to acknowledge that the developed model demonstrated noteworthy outcomes in segmenting the injured region, which demonstrates the efficacy of the chosen methodology and framework. Upon analyzing the results, we observed that the model exhibits a notable degree of accuracy and a commendable capacity to discern the affected region within the images. This tool has the potential to be a valuable resource for healthcare practitioners in the identification and management of various types of wounds. One of the primary benefits of the developed model lies in its capacity to handle diverse categories of wounds and a wide range of medical images: it addresses both superficial injuries and intricate wounds, which makes it a versatile instrument for diverse medical practice scenarios. Furthermore, the developed model can automate and expedite the segmentation of wounded areas. Instead of relying on manual delineation and analysis of wounds by medical personnel, the proposed model offers a rapid and precise means of determining the precise location of the wound. This will enable healthcare professionals to divert their
attention towards other facets of treatment and enhance the overall efficacy of the procedure.This research proposes a strategy that can be applied to other wound types utilizing data and models in clinical practice.Chronic wounds like diabetic or pressure ulcers heal differently than acute wounds like surgical incisions or burns.Thus, data and models used to detect and segment wounds must account for these variances and characteristics, including wound location, shape, depth, infection, and tissue type.Adapting and generalizing the presented technique to additional wound types and animals may result in a more comprehensive and versatile wound analysis tool for clinical practice and wound care research.However, data availability and quality, wound complexity and unpredictability, and ethical and practical issues in the use of animals for wound experiments present challenges.Future research should explore and evaluate these difficulties.
Comparing the performance to the state-of-the-art
As part of this study, we performed wound segmentation using modern deep-learning algorithms.In our work, we set ourselves the goal of surpassing the results of previous studies and increasing the accuracy and efficiency of the segmentation process.In this section, we compare our obtained results with the results obtained by other researchers in the field of wound segmentation.To do this, we provide a table with detailed indicators of IOU and Dice scores that are used to assess the quality of segmentation.Table 2 provides a comparison of wound segmentation results between our approach and previous studies.The table allows us to analyze the advantages and limitations of our approach compared to previous work, as well as identify possible areas for improving the results.Our conclusions and recommendations can contribute to the development of wound segmentation and increase its applicability in the practice of clinical medicine.
Limitations and future scope
A limitation of the present study is the lack of explainability of the proposed hybrid model for wound segmentation. A deep model's inexplicability severely restricts how effectively it can be used, so enhancing the model's explainability is a viable avenue for future work that would improve the model's interpretability and applicability. It is also suggested that future research concentrate on employing transformers and other deep models to increase performance; this additional avenue can potentially enhance and optimize performance on the wound segmentation task. Furthermore, studies in this field might result in the development of hybrid model-based techniques for wound segmentation that are more precise and effective.
Conclusion
In this paper, we suggest a deep learning methodology aimed at enhancing the generalization of wound image segmentation. Our proposed approach integrates the U-Net and ResNet34 architectures into a hybrid model. The empirical evidence demonstrates that the hybrid model yields more precise segmentation outcomes, as indicated by superior Intersection over Union (IOU) and Dice scores. Furthermore, the hybrid model effectively minimizes the occurrence of incorrectly identified regions. We found that integrating automated wound segmentation and detection improves segmentation efficiency and allows the segmentation model to generalize effectively to out-of-distribution wound images. Therefore, considering our dataset, the combined use of U-Net and ResNet34 in the described method offers a benefit compared to employing the algorithms individually, even when incorporating distinct post-processing procedures.
We conclude that our findings about the hybrid model can be generalized to other medical datasets characterized by diversity in cell densities.Therefore, researchers are strongly encouraged to adopt our proposed methodology for additional research in this area.
FIGURE 2 | An illustration of the outcomes of data labeling using the labelme tool, where the blue polygon in panels (A-C) indicates the area around the wound, while the red polygon indicates the wounded area.
TABLE 1 | Results of cross-validation of the model based on the k-fold method.
Discovery of Selenocysteine as a Potential Nanomedicine Promotes Cartilage Regeneration With Enhanced Immune Response by Text Mining and Biomedical Databases
Background: Unlike bone tissue, little progress has been made regarding cartilage regeneration, and many challenges remain. Furthermore, the key roles of cartilage lesions caused by trauma, focal lesion, or articular overstress remain unclear. Traumatic injuries to the meniscus, as well as its degeneration, are important risk factors for long-term joint dysfunction, degenerative joint lesions, and knee osteoarthritis (OA), a chronic joint disease characterized by degeneration of articular cartilage and hyperosteogeny. Nearly 50% of individuals with meniscus injuries develop OA over time. Due to the limited inherent self-repair capacity of cartilage lesions, biomaterial drug nanomedicine is considered a promising alternative. Therefore, it is important to elucidate the potential gene regeneration mechanisms and discover novel precise medications, which are identified through this study to investigate their function and role in pathogenesis. Methods: We downloaded the mRNA microarray dataset GSE117999, involving paired cartilage lesion tissue samples from 12 OA patients and 12 patients from a control group. First, we analyzed these data to recognize the differentially expressed genes (DEGs). We then performed gene ontology (GO) annotation and Kyoto Encyclopaedia of Genes and Genomes (KEGG) pathway enrichment analyses for these DEGs. Protein-protein interaction (PPI) networks were then constructed, from which we attained eight significant genes after a functional interaction analysis. Finally, we identified a potential nanomedicine from this assay set, using the wide range of inhibitor information archived in the Search Tool for the Retrieval of Interacting Genes (STRING) database. Results: Sixty-six DEGs were identified with our standards for significance (adjusted P-value < 0.01, |log2 FC| ≥ 1.2). Furthermore, we identified eight hub genes and one potential nanomedicine, selenocysteine, based on these integrative data. Conclusion: We identified eight hub genes that could work as prospective biomarkers for the diagnosis and biomaterial drug treatment of cartilage lesions, involving the novel genes CAMP, DEFA3, TOLLIP, HLA-DQA2, SLC38A6, SLC3A1, FAM20A, and ANO8. These genes were mainly associated with immune response, immune mediator induction, and cell chemotaxis. Significant support is provided for obtaining a series of novel gene targets, and we identify potential mechanisms for cartilage regeneration and eventual nanomedicine immunotherapy in regenerative medicine.
INTRODUCTION
Cartilage lesions occur as a result of destructive joint diseases, such as osteoarthritis (OA) (Hu et al., 2018). They can cause disability, joint pain, movement limitation, and function impairment (Renders et al., 2014). Currently, there are no efficient options for cartilage regeneration, due to the substandard inherent repair capacity of the damaged part of the cartilage lesion (Alcidi et al., 2007). Articular cartilage has the unique function of conducting stress and reducing friction, and its damage can lead to joint dysfunction and even disability. The articular cartilage itself has no blood supply, nerves, or lymphoid tissues; it contains few chondrocytes and has relatively low remodeling capacity. The chondrocytes in the matrix pits have low metabolic activity (Lawrence et al., 2017). They mainly obtain necessary nutrients and excrete metabolites through diffusion. The inability or difficulty of cartilage to repair itself has been a major challenge for the orthopaedic community (Pang et al., 2013). Clinical treatments of articular cartilage lesions can be divided into reparative and non-reparative approaches (Uehara et al., 2015). There are two types of reparative surgery: biological and non-biological. Non-reparative operations are debridement and joint irrigation. Non-biological methods, such as artificial joint prosthesis replacement and articular surface shaping, have achieved very good results and functional recovery (Yao et al., 2015; Ragni et al., 2019). Additionally, severe knee cartilage lesions often require total knee arthroplasty (TKA), a surgical option for easing pain and facilitating knee reconstruction (Math et al., 2006).
The methods extensively used for cartilage regeneration in clinical practice involve distinctive types of scaffolds that imitate the native environment, as follows: (1) mosaicplasty, the replacement of the missing cartilage with an autologous transplant of collected cartilage, such as the double-layer collagen type I/III scaffold (MACI); (2) microfracture, in which a double-layer type I collagen sponge holding chondroitin sulfate supports the recruitment of bone marrow stromal cells to the cartilage defect site; (3) ACI, the in vitro culture of autologous chondrocytes collected from a specific area of the cartilage and then injected into the defect site, covered with a resorbable collagen membrane (Zhang et al., 2019); and (4) MACI, the transplantation of a viable scaffold surrounding the previously cultured autologous chondrocytes (Li et al., 2016). However, the effectiveness of these methods for cartilage regeneration currently remains far from satisfactory (Chen et al., 2020). Consequently, there is a vital need to identify potential mechanisms of cartilage regeneration and to mine efficacious nanomedicines for regenerative medicine.
Bioinformatics is an emerging interdisciplinary field that manages the storage, retrieval, sharing, and optimal use of data and knowledge for problem-solving and decision-making purposes (Hasman et al., 2011). The development and renewal of bioinformatics have provided the opportunity to mine large databases and uncover more meaningful solutions (Cheng, 2018). Massive databases of cartilage lesion samples have accumulated in recent years, and a great number of differentially expressed genes (DEGs) have been determined using gene ontology (GO) annotation and Kyoto Encyclopaedia of Genes and Genomes (KEGG) pathway enrichment analyses (Huang et al., 2018). At present, some basic bioinformatics tools have been applied in the clinic, providing powerful weapons for clinical diagnosis, prevention, treatment, and clinical efficacy evaluation. For example, the GenBank and OMIM databases are widely used by clinicians to search for pathogen- or human-disease-related gene information. Clinicians can then design specific primers and probes with biological software for the diagnosis, typing, quantification, and identification of drug resistance genes. This plays an important role in the prevention, early diagnosis, treatment, and prognosis of infectious diseases, genetic diseases, and tumors.
In our analysis, we obtained the GSE117999 mRNA expression data from the Gene Expression Omnibus (GEO). DEG analyses were performed on cartilage tissue from 12 osteoarthritis patients and 12 patients without osteoarthritis (marked control), using the R software (Jalal et al., 2017) and the limma package (Ritchie et al., 2015). Subsequently, GO and KEGG analyses were accomplished using the DAVID website (Database for Annotation, Visualization and Integrated Discovery). Protein-protein interaction (PPI) networks were constructed, and eight hub genes were identified. Finally, we discovered a cathelicidin antimicrobial peptide (CAMP) inhibitor, selenocysteine, using the DGIdb database (Cotto et al., 2018). Using this approach (Figures 1 and 2), we recognized numerous potentially important cartilage regeneration-associated genes and pathways.
MATERIALS AND METHODS
mRNA Microarray Statistics and Database Acquisition
The mRNA microarray dataset GSE117999 (Rai MF et al., 2018) (patient/tissue details are given in the Supplementary Material) was downloaded from the community GEO database (Barrett et al., 2013) (https://www.ncbi.nlm.nih.gov/geo) and was generated on the GPL20844 platform. GSE117999 contains 12 patients with osteoarthritis and 12 patients without osteoarthritis (arthroscopic partial meniscectomy). The normalized log2 ratio expressing OA/N-OA of the GSE117999 dataset, normalized by the L and Q process (Sbarrato et al., 2017), was used. Gene IDs of interest were converted to gene symbols on the Agilent-072363 SurePrint G3 Human GE v3 8x60K Microarray 039494 (Feature Number version). The Institutional Review Board approved the study protocol. Prior to participation, written informed consent was obtained from each patient (Brophy et al., 2018).
DEG Analysis
DEG analysis identifies genes expressed at significantly different levels and can be performed by numerous methods (Rahmatallah et al., 2016). We used limma in R to recognize the DEGs in the tissues of OA patients compared with the patients without osteoarthritis (arthroscopic partial meniscectomy). Genes with |log2 FC| ≥ 1.2 and adjusted P-values < 0.01 (moderated t-statistics, corrected by the Benjamini-Hochberg method) were taken forward for analysis (Solari et al., 2017).
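The selection criteria translate into a few lines of code. The paper used R/limma with moderated t-statistics, so the Python sketch below, with ordinary t-tests and Benjamini-Hochberg correction, only illustrates the thresholding logic rather than reproducing the limma pipeline.

```python
# Minimal sketch of the DEG thresholds (|log2 FC| >= 1.2, adjusted p < 0.01);
# ordinary t-tests stand in for limma's moderated statistics.
import numpy as np
from scipy import stats
from statsmodels.stats.multitest import multipletests

def select_degs(expr_oa: np.ndarray, expr_ctrl: np.ndarray) -> np.ndarray:
    """expr_*: genes x samples arrays of normalized log2 expression."""
    log2_fc = expr_oa.mean(axis=1) - expr_ctrl.mean(axis=1)
    _, pvals = stats.ttest_ind(expr_oa, expr_ctrl, axis=1)
    _, adj_p, _, _ = multipletests(pvals, method="fdr_bh")   # B-H correction
    return (np.abs(log2_fc) >= 1.2) & (adj_p < 0.01)         # boolean DEG mask
```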
GO and KEGG Analysis
GO (Carbon et al., 2017; Thomas, 2017) comprises a set of terms describing gene products in terms of biological process (BP), molecular function (MF), and cellular component (CC). KEGG (Kanehisa et al., 2017) offers data on recognized biological pathways.
FIGURE 1 | General text mining approach. Data mining was applied to distinguish genes connected with osteoarthritis (OA) from those of patients without osteoarthritis (marked control, N-OA), using the Gene Expression Omnibus. The obtained genes were investigated for their function and gene pathways using GO and KEGG. Enrichment was attained by PPI with STRING: 66 hub genes were enriched, and among them eight significant genes were targetable by drugs according to DGIdb. Drug-gene interaction: these eight genes were queried in DGIdb and the inhibitor selenocysteine was recognized as having a prospective impact on cartilage regeneration. The final enriched gene list was identified using the DGIdb.
We used DAVID (Dennis et al., 2003) to visualize the biological functions and pathway enrichment of the DEGs (significance was determined as p-value < 0.05).
PPI Networks Construction
The STRING database (version 11.0) (Wei et al., 2019) was used. This database includes over 25 million proteins and their interactions, across more than 5,000 organisms. The interaction score was set to ≥ 0.900, and the PPI (protein-protein interaction) networks were created.
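STRING exports report a combined confidence score on a 0-1000 scale, so the ≥ 0.900 cut-off corresponds to keeping edges with a combined score of at least 900. The sketch below shows one way to apply the filter and inspect highly connected nodes; the file name, column names, and the degree-based hub ranking are illustrative assumptions, not the paper's exact procedure.

```python
# Hedged sketch of confidence filtering on a STRING interaction export;
# file and column names are hypothetical placeholders.
import networkx as nx
import pandas as pd

edges = pd.read_csv("string_interactions.tsv", sep="\t")   # hypothetical export
high_conf = edges[edges["combined_score"] >= 900]           # keep >= 0.900 edges

ppi = nx.Graph()
ppi.add_edges_from(zip(high_conf["protein1"], high_conf["protein2"]))
hubs = sorted(ppi.degree, key=lambda kv: kv[1], reverse=True)[:8]  # illustrative hub pick
```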
Drug-Gene Interactions
Drugs were selected based on the hub genes that served as promising targets, using the Drug-Gene Interaction Database (DGIdb; http://www.dgidb.org/search_interactions). In this study, the final drug was approved by the Food and Drug Administration (FDA) (Yang et al., 2020). This step was designed to acquire data on drug-gene interactions and gene targetability. Furthermore, PubChem (https://www.ncbi.nlm.nih.gov/pccompound/) was used to confirm whether the medicines identified in our enquiry could target the genes that we identified.
Quantitative Real-Time PCR (qRT-PCR)
After total RNA was extracted and mRNA purified (RNeasy Mini kit, Qiagen, Hilden, Germany), mRNA was converted to cDNA using the TransScript First-Strand cDNA Synthesis SuperMix (TransScript, #AT301, Beijing, China). Assays-on-demand primers and probes and TaqMan Universal Master Mix were used to examine gene expression on the MiniOpticon RT-PCR system (Bio-Rad, Hercules, CA, USA) according to the user's manual, and amplified detection was conducted using SYBR-Green RealMasterMix (Bio-Rad). The expression values of mRNAs were normalized against glyceraldehyde-3-phosphate dehydrogenase (GAPDH) and calculated relatively using the 2^(-ΔΔCt) method (Ye et al., 2019).
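For readers unfamiliar with relative quantification, the 2^(-ΔΔCt) arithmetic can be spelled out as follows; the Ct values in the usage line are invented solely to show the calculation.

```python
# The 2^(-delta-delta-Ct) method with GAPDH as the reference gene;
# the example Ct values are hypothetical.
def relative_expression(ct_target_s, ct_gapdh_s, ct_target_c, ct_gapdh_c):
    d_ct_sample = ct_target_s - ct_gapdh_s     # normalize sample to GAPDH
    d_ct_control = ct_target_c - ct_gapdh_c    # normalize control to GAPDH
    dd_ct = d_ct_sample - d_ct_control
    return 2.0 ** (-dd_ct)                     # fold change vs. control

fold = relative_expression(24.1, 18.0, 26.3, 18.1)   # -> ~4.3-fold upregulation
```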
RESULTS
Identification of DEGs
In our analysis, we recognized DEGs from 12 paired cartilage tissue samples (OA) compared with N-OA, using R-limma (Figure 1; Table).
GO and KEGG Analysis
To discover further prospective targets of these DEGs in cartilage regeneration, we performed GO and KEGG analyses on the OA data, using a p-value of < 0.05 (Figure 4). As shown in Figure 4, all noteworthy terms are given for each of the following: the BP, CC, and MF categories and the KEGG pathways of the DEGs.
The marks of the UP-GENEs and the DOWN-GENEs are shown. As revealed in Table 2, in the BP group, the UP-GENEs were primarily enriched for genes involved in the intracellular oestrogen receptor signalling pathway, the innate immune response in mucosa, and the antibacterial humoral response. The DOWN-GENEs were enriched for genes in the antigen processing and presentation of peptide or polysaccharide antigens via MHC class II; they were also enriched in antigen processing and presentation generally, and in the interferon-gamma-mediated signalling pathway. In the CC group, the UP-GENEs were chiefly enriched for genes associated with being an integral component of the plasma membrane. The DOWN-GENEs were enriched for genes associated with the MHC class II protein complex and integral membrane components.
PPI Networks and Analysis
The DEGs were uploaded to STRING, and the results were explored. Eight hub genes with scores of > 0.900 (maximum confidence) were chosen to create the PPI networks: CAMP, TOLLIP, DEFA3, HLA-DQA2, SLC38A6, SLC3A1, FAM20A, and ANO8 (Figure 5). CAMP encodes a member of an antimicrobial peptide family, characterized by chemoattraction, immune mediator induction, and immunoreaction regulation (Snoussi et al., 2018; Uysal et al., 2019). Toll interacting protein (TOLLIP) encodes a ubiquitin-binding protein that regulates inflammatory signalling (Diao et al., 2016; Shah et al., 2017). Defensin alpha 3 (DEFA3) belongs to a family of antimicrobial and cytotoxic peptides involved in host defence (Okazaki et al., 2007; Froy and Sthoeger, 2009). Major histocompatibility complex, class II, DQ alpha 2 (HLA-DQA2) is a member of the HLA class II alpha family; many investigators suggest it is involved in the release of the CLIP molecule (Rudy and Lew, 1994). Solute carrier family 38 member 6 (SLC38A6) is possibly related to the regulation of the glutamate-glutamine cycle, responsible for preventing excitotoxicity (Schioth et al., 2013; Bagchi et al., 2014). Solute carrier family 3 member 1 (SLC3A1) encodes a type II membrane glycoprotein involved in the transport of neutral amino acids and associated with cystinuria (Ma et al., 2018). FAM20A (golgi associated secretory pathway pseudokinase) encodes a protein that might function in haematopoiesis and is associated with amelogenesis imperfecta and gingival hyperplasia syndrome (Beres et al., 2018; Koruyucu et al., 2018). Anoctamin 8 (ANO8) is associated with a human disorder and is often overexpressed in diverse cancers (Katoh and Katoh, 2005; Ousingsawat et al., 2011).
Drug-Gene Interactions
A functional enrichment investigation was conducted to identify the final prospective hub genes, with the filter criterion p-value < 1×10⁻¹⁰ as the edge, using the GenCLiP 2.0 website (Figure 6). The DGIdb database was used to search for the potential target CAMP and its small organic compound, selenocysteine. Selenocysteine is the main form of selenium in proteins. The study of selenocysteine biosynthesis and the mechanism of its incorporation into proteins is a classic topic in protein biochemistry. This important supplement to molecular biology is also the basis for further research into the biological functions and applications of selenoproteins (Stadtman, 1996). The structure of selenocysteine is shown in Figure 7.
The Expression of CAMP Was Increased in OA Cartilage Tissues
To examine the expression of mRNAs in the articular cartilage, qRT-PCR was performed in 3 samples. We found that the expression of CAMP was significantly upregulated in the OA group compared with the normal group (Figure 8).
DISCUSSION
This analysis was designed to recognize prospective cartilage regeneration-associated genes by comparing cartilage tissues from OA patients with those of patients without osteoarthritis (arthroscopic partial meniscectomy). Thirty-five UP and 31 DOWN DEGs were recognized. We accomplished GO and KEGG annotation examinations. Next, a PPI network was created, and eight significant genes were recognized. Finally, the eight genes CAMP, TOLLIP, DEFA3, HLA-DQA2, SLC38A6, SLC3A1, FAM20A, and ANO8 were identified as being significantly associated with immune response, immune mediator induction, and cell chemotaxis. The CAMP inhibitor selenocysteine may be a potential nanomedicine candidate for cartilage regeneration. In summary, of the eight hub genes, only CAMP has a commercial inhibitor. There have been many studies of the role of transcription factors in regulating the promoter region of the CAMP gene (Horibe et al., 2013). These studies have revealed that many biomolecules and factors can regulate the expression of the human antibacterial peptide CAMP gene, and that different signalling pathways can also affect its expression (Hancock and Diamond, 2000). Due to the important role that the CAMP protein plays in infectious diseases, tumors, and other diseases, it may be a target for their diagnosis and treatment (Kovach et al., 2012). It is expected that the molecular regulatory mechanism of the CAMP gene during the occurrence of diseases, especially infectious diseases, will become a topic of significant further research (Rosenberger et al., 2004).
Selenocysteine is the main form of selenium in proteins. The determination of its codon, UGA, increases the number of amino acids that make up biosynthesized proteins from 20 to 21 (Mousa et al., 2017). It is also the only amino acid that contains a metalloid element. It is mostly located in the active center of selenoproteins or selenoenzymes (especially antioxidant enzymes). At the same time, as an essential trace element of the human body, selenium plays very significant antioxidant, immune-regulatory, and antitumor roles (Hatfield et al., 2014; Varlamova and Cheremushkina, 2017). This important supplement to molecular biology is also the basis for further research into the biological functions and applications of selenoproteins (Stadtman, 1996). It is one of the hotspots of research in the fields of protein biochemistry and molecular biology (Ren et al., 2018).
Interestingly, Se deficiency has been proposed as an underlying contributing factor for the chronic osteochondral disease Kashin-Beck disease, an important but neglected disease in parts of China. It was previously shown that Se (IV), as well as superoxide dismutase, can prevent damage to cultured human embryonic cartilage cells caused by various etiological environmental substrates, and can increase the activity of GSH-Px while decreasing the production of lipid peroxides (Peng and Yang, 1991). In another report, the disease was associated with the incidence of Se deficiency in regions where the disease is prevalent (Peng et al., 1992). Finally, a mixture containing glycosaminoglycans, selenium, and vitamin E was shown to be exceptionally capable of promoting osteochondral repair in a rabbit model of knee osteochondral defect after 6 weeks of treatment (Handl et al., 2007). These past studies provide some evidence to support our current finding that selenocysteine may be an effective drug to support cartilage regeneration.
To conclude, we identified 66 prospective candidate cartilage regeneration-associated genes that have previously been implicated in numerous pathways related to pathogenesis. All of these candidate DEGs should be further established through biological trials. Moreover, CAMP, TOLLIP, DEFA3, HLA-DQA2, SLC38A6, SLC3A1, FAM20A, and ANO8 are prospective markers for cartilage regeneration; they have not been linked to it previously, either in diagnosis or in research. Thus, these eight targets were considered potential therapeutic targets for cartilage regeneration, and the CAMP inhibitor selenocysteine is considered a potential nanomedicine candidate for regenerative medicine.
DATA AVAILABILITY STATEMENT
The raw data supporting the conclusions of this article will be made available by the authors, without undue reservation.
A symmetry-based length model for characterizing the hypersonic boundary layer transition on a slender cone at moderate incidence
The hypersonic boundary layer (HBL) transition on a slender cone at moderate incidence is studied via a symmetry-based length model: the SED-SL model. The SED-SL specifies an analytic stress length function (which defines the eddy viscosity) describing a physically sound two-dimensional multi-regime structure of the transitional boundary layer. Previous studies showed accurate predictions, especially of the drag coefficient, by the SED-SL for airfoil flows at different subsonic Mach numbers, Reynolds numbers and angles of attack. Here, the SED-SL is extended to compute the hypersonic heat transfer on a 7° half-angle straight cone at Mach numbers 6 and 7 and angles of attack from 0° to 6°. It is shown that a proper setting of the multi-regime structure with three parameters (i.e. a transition center, an after-transition near-wall eddy length, and a transition width quantifying transition overshoot) yields an accurate description of the surface heat fluxes measured in wind tunnels. Uniformly good agreement between simulations and measurements is obtained from the windward to the leeward side of the cone, implying the validity of the multi-regime description of the transition independent of instability mechanisms. It is concluded that a unified description of the HBL transition on a cone is found, which might offer a basis for developing a new transition model that is simultaneously of computational simplicity, sound physics and greater accuracy.
few results are satisfactory, owing to the complexity from the interplay of multiple factors [2] and from the variation of influential pattern in a wide factor range (e.g. for the nosetip bluntness effect [5]). Since nearly all transition models include a correlation equation to define the transition onset location [6,7], which is poorly predicted in general, the existing models usually give insufficient and even undetermined prediction accuracies [8,9]. Furthermore, HBLs are generally governed by different instability mechanisms on different surface areas [1,10,11], which results in enormous complexities in modeling and simulating HBL transitions [12,13]. The status calls for a reflection: whether one should always rely on a correct description of the transition mechanism for the foundation of a transition model, or perhaps might discover a universal principle governing transitional boundary layers, and work out a more efficient transition model. Here we take the latter perspective and attempt to develop a new approach.
To begin with, we propose the notion of a "similarity structure" of the boundary layer, which ensures a similarity of the flow for varying Reynolds number (Re), Mach number (Ma), etc. We further introduce the notion of an "order function" as the similarity variable displaying the right symmetry of the similarity structure. Specifically, for boundary layers, the most important symmetry is the symmetry under a dilation transformation from the wall in the wall-normal direction and from the leading edge in the streamwise direction. The similarity variable is a (stress) length (SL) that characterizes the size of the eddies most relevant to the momentum transport (so as to define the eddy viscosity). In the so-called structural ensemble dynamics (SED) theory [14], it has been identified that a wall-normal four-layer structure (corresponding to the viscous sublayer, buffer layer, log-layer and bulk flow) is the similarity structure of the turbulent boundary layer (TBL). Its dilation symmetry is expressed by the SL function with a unique scaling law in the wall distance within each layer, as well as a universal scaling transition between adjacent layers, which have been derived from a novel Lie-group analysis of the momentum and energy balance equations [14,15], and validated in detail for the canonical (i.e. zero-pressure-gradient flat-plate) TBL [14]. As a consequence, the mean-flow profiles of the canonical TBL have been predicted, for the first time, over the whole boundary layer thickness and for all Re, at an unprecedented accuracy compared to direct numerical simulation (DNS) and experimental data [14,16,17].
More recently, a streamwise three-layer structure (corresponding to the laminar, transitional and fully-developed TBL states, respectively) was suggested for the SL function to describe transitional boundary layers, which yielded a new algebraic transition model called SED-SL [18]. Application of the SED-SL to several airfoil flows (e.g. NACA0012 and RAE2822) at different subsonic Ma, Re and angles of attack (AoA) yielded an unprecedented accuracy (up to a few counts) in the drag prediction [19], indicating the validity of the concept of the similarity structure and pointing out an interesting new direction for turbulence model construction. It is interesting to know whether or not this symmetry-based model can capture the universal features of transition processes, whose parameterization might show a clearer dependency on the influential factors, without involving the specific instability mechanism that causes the transition. We present here a preliminary study applying the SED-SL to the HBL transition over a straight cone with an AoA, to further validate the merit of the model.
In the following, we employ the SED-SL to compute the transitional flows on a 7° half-angle straight cone at Ma 6 and 7 and AoA from 0° to 6°. It is shown that, by properly setting the two-dimensional, multi-regime structure of the HBL over the cone surface through only three parameters of clear physical meaning (i.e. a transition center, an after-transition near-wall eddy length, and a transition width quantifying transition overshoot), the SED-SL correctly reproduces the heat flux distribution on the whole cone surface, in excellent agreement with the wind tunnel data. We demonstrate a simple adaptation of the SED-SL to different flow parameters (e.g. Ma, AoA, free-stream disturbances) and to different instability mechanisms (which vary with AoA and the circumferential angle of the cone). Thus, we propose a unified description of the HBL transition under changes of several influential factors and transition mechanisms. With future studies to determine the variations of the three physical parameters with the surface geometry and flow conditions, the SED-SL has the potential to lead to a new generation of transition models that are simultaneously of computational simplicity, sound physics and greater accuracy.
The paper is organized as follows. Section 2 describes the SED-SL model and the computational setup. Section 3 presents the results of computation, which validate our predictions for different flow cases. Section 4 is devoted to discussion and conclusion.
Theory
In the Favre RANS approach, the conservation equations of mass, momentum and energy, under the thin-layer and Boussinesq approximations and assuming a constant turbulent Prandtl number, are written as follows [12]:

$$\frac{\partial \bar{\rho}}{\partial t}+\frac{\partial (\bar{\rho}\tilde{U}_j)}{\partial x_j}=0,$$

$$\frac{\partial (\bar{\rho}\tilde{U}_i)}{\partial t}+\frac{\partial (\bar{\rho}\tilde{U}_i\tilde{U}_j)}{\partial x_j}=-\frac{\partial \bar{P}}{\partial x_i}+\frac{\partial \bar{\sigma}_{ij}}{\partial x_j},$$

$$\frac{\partial (\bar{\rho}\tilde{E})}{\partial t}+\frac{\partial (\bar{\rho}\tilde{U}_j\tilde{H})}{\partial x_j}=\frac{\partial}{\partial x_j}\left[\bar{\sigma}_{ij}\tilde{U}_i+\left(\frac{\mu}{Pr}+\frac{\mu_t}{Pr_t}\right)C_p\frac{\partial \tilde{T}}{\partial x_j}\right],$$

where the overbar denotes Reynolds averaging and the tilde denotes Favre averaging; $\rho$ is the fluid density, $U_j$ is the $j$-th component of velocity and $x_j$ represents the Cartesian coordinate. $\bar{P}=\bar{\rho}R\tilde{T}$ is the static pressure ($R$ is the gas constant) and the total enthalpy is $\tilde{H}=\tilde{E}+\bar{P}/\bar{\rho}$, where the total energy is $\tilde{E}=C_v\tilde{T}+K$, with $C_v$ the specific heat at constant volume ($C_p$ the specific heat at constant pressure), $T$ the static temperature and $K=\frac{1}{2}\tilde{U}_k\tilde{U}_k$ the kinetic energy of the flow. The molecular Prandtl number $Pr$ is 0.72, and the turbulent Prandtl number $Pr_t$ is set to 0.9. The stress $\bar{\sigma}_{ij}$ is expressed using the Boussinesq approximation as

$$\bar{\sigma}_{ij}=2(\mu+\mu_t)\left(\tilde{S}_{ij}-\frac{1}{3}\tilde{S}_{kk}\delta_{ij}\right),\qquad \tilde{S}_{ij}=\frac{1}{2}\left(\frac{\partial \tilde{U}_i}{\partial x_j}+\frac{\partial \tilde{U}_j}{\partial x_i}\right),$$

where $\tilde{S}_{ij}$ is the strain-rate tensor. The molecular viscosity $\mu$ is computed using the Sutherland law, $\mu/\mu_\infty=(\tilde{T}/T_\infty)^{3/2}(T_\infty+T_S)/(\tilde{T}+T_S)$, where $T_S=110.4\,\mathrm{K}$ and the subscript $\infty$ indicates the free-stream property. The turbulent viscosity $\mu_t$ has to be modeled.
The SED-SL transition model
The SED theory has extended the classical mixing-length concept [20] to multiple Reynolds stress lengths, to describe both the Reynolds shear stresses and the Reynolds normal stresses [15]. The Reynolds shear stress length $\ell_{12}$ in a compressible TBL is defined as

$$\ell_{12}=\sqrt{-\widetilde{u''v''}}\Big/\frac{\partial \tilde{U}}{\partial y},$$

where $\tilde{U}$ is the streamwise mean velocity, and $u''$ and $v''$ are the Favre fluctuations of the streamwise and wall-normal velocity components, respectively. $\ell_{12}$ has been interpreted as the characteristic size of the eddies responsible for the wall-normal turbulent transport of momentum. It is applied to express $\mu_t$ of the TBL in the SED-SL model as follows [17-19]:

$$\mu_t=\bar{\rho}\,\ell_{12}^{2}\,|\omega|,\qquad \omega=\partial_y\tilde{U}-\partial_x\tilde{V},$$

following the same convention as in the Baldwin-Lomax model [21]. The SED theory predicts that the stress length $\ell_{12}$ possesses a multilayer form in the fully-developed TBL [14,15]:

$$\ell_{12}^{+\,\mathrm{inner}}=\ell_0^{+}\left(\frac{y^{+}}{9.7}\right)^{3/2}\left[1+\left(\frac{y^{+}}{9.7}\right)^{4}\right]^{1/8}\left[1+\left(\frac{y^{+}}{y^{+}_{\mathrm{buf}}}\right)^{4}\right]^{-1/4},\tag{7}$$

$$\ell_{12}^{\mathrm{outer}}=\frac{\kappa\delta}{4}\left(1-r^{4}\right),\tag{8}$$

where $y^{+}$ is the distance from the wall in wall units (i.e. $y^{+}=y u_\tau/\nu_w$; $\nu$ is the kinematic viscosity; $u_\tau=\sqrt{\mu_w(\mathrm{d}\tilde{U}/\mathrm{d}y)_w/\rho_w}$ is the friction velocity; and the subscript $w$ denotes the wall value), $y^{+}_{\mathrm{buf}}$ is the thickness of the buffer layer, $\ell_0^{+}=9.7^{2}\kappa/y^{+}_{\mathrm{buf}}$ is the near-wall eddy length, and $\kappa$ is the Karman constant. $r=1-y/\delta$ is the outer coordinate, being the distance (to the wall) from the outer dilation center (at $\delta$) of the TBL. In the current model, $\delta$ is evaluated following the Baldwin-Lomax model [21] from

$$y_{\mathrm{max}}=\arg\max_{y}\left(\ell_{12}^{+\,\mathrm{inner}}\,|\omega|\right).\tag{9}$$

Eqs. (7)-(9) describe a wall-normal four-layer structure for the canonical TBL, comprising the viscous sublayer (with thickness 9.7 and power law $\ell_{12}^{+}\propto y^{+3/2}$), the buffer layer (with thickness $y^{+}_{\mathrm{buf}}$ and power law $\ell_{12}^{+}\propto y^{+2}$), the log-layer, and the bulk/wake flow (with the $1-r^{4}$ structure). It is easy to verify that for $y^{+}\gg y^{+}_{\mathrm{buf}}$ and $r\approx 1$ in Eq. (7), i.e. in the matching region between the inner and outer regimes of the TBL, one obtains the celebrated linear law $\ell_{12}^{+}=\kappa y^{+}$ of Prandtl [20]. For the canonical TBL the SED asserts that $y^{+}_{\mathrm{buf}}$ is about 41 and $\kappa$ about 0.45, thus $\ell_0^{+}$ is about 1.03. Consequently there is no free parameter in Eqs. (7)-(9), yet they are adequate to accurately predict the whole mean velocity profile of the canonical TBL at all (above moderate) Re [14]. As for non-canonical TBLs, the above four-layer structure may be deformed by influential factors (such as the pressure gradient) and evolve spatially. A crucial assumption of the SED is that, because the wall constraint remains the dominant effect, the deformation is only finite, slow and continuous, such that it can be described with variable multilayer structure parameters. Two parameters have been identified as crucial: $\ell_0^{+}$ and $y^{+}_{\mathrm{buf}}$, which determine the global size and location of the strongest eddies (which are vulnerable to environmental variation) and thus characterize the major deformation of the boundary layer. The above assumption has been validated for a series of non-canonical TBLs, in particular the transitional boundary layer (TrBL) [18,19].
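For concreteness, the inner stress-length profile can be coded directly from the scaling laws stated above (the $y^{+3/2}$ sublayer below 9.7, the $y^{+2}$ buffer layer up to $y^{+}_{\mathrm{buf}}$, and the linear $\kappa y^{+}$ law beyond). The composite transition factors below are our reconstruction consistent with those scalings, not a verbatim copy of the paper's Eq. (7).

```python
# Reconstructed sketch of the inner stress-length multilayer function; the
# composite transition factors are an assumption matching the stated scalings.
def stress_length_inner(y_plus: float, y_buf: float = 41.0, kappa: float = 0.45) -> float:
    l0 = 9.7**2 * kappa / y_buf                       # near-wall eddy length, ~1.03
    sublayer = (y_plus / 9.7) ** 1.5                  # viscous sublayer: l+ ~ y+^{3/2}
    to_buffer = (1.0 + (y_plus / 9.7) ** 4) ** 0.125  # 3/2 -> 2 scaling transition
    to_log = (1.0 + (y_plus / y_buf) ** 4) ** -0.25   # 2 -> 1 scaling transition
    return l0 * sublayer * to_buffer * to_log         # tends to kappa * y+ far from the wall
```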
To describe the TrBL, Xiao and She [18] have proposed a two-dimensional multi-regime structure by postulating a streamwise multilayer dilation with respect to the leading edge (i.e. x = 0) for ℓ0+ and y_buf+. Here, for the transitional HBL of the cone, ℓ0+(x) and y_buf+(x) are written as streamwise multilayer functions in which three transition parameters are introduced. x* is called the transition center, a newly identified indicator of the transition location that is believed to be physically superior to the conventional transition onset location [18]. x* requires further modeling and remains an adjustable model parameter in the present study. ℓ0∞+ characterises the near-wall eddy size of the fully developed TBL; it determines the skin friction and surface heat flux after the transition. ℓ0∞+ depends on the pressure gradient, Ma, wall temperature (Tw), etc. [19]. For a not-far-from-adiabatic TrBL with at most moderate pressure gradient, such as the current cone flows, ℓ0∞+ is around unity, i.e. the canonical TBL value; for hypersonic flow over a significantly cooled wall, it can be substantially smaller than unity owing to the strong squeezing effect of very large Ma and very small Tw on the eddy size. β_l ≥ 1 is a coefficient that quantifies the strength of the transition overshoot (a phenomenon in which the skin-friction coefficient and the Stanton number around the transition end considerably surpass their TBL values at the same Reynolds number). It is the ratio of the location where the transition starts to relax towards the fully developed TBL to the location of the transition center; thus, when larger than unity, it defines a unique transition regime between the laminar and fully turbulent regimes. The larger β_l, the wider the transition regime, the stronger the transition overshoot, and the higher the peak heat flux. When β_l = 1, no transition regime or transition overshoot appears. For most by-pass transitions, the typical value of β_l is about 1.1.
Note that the scaling exponents 5 in Eq. (11) and 5.5 in Eq. (12) are empirical parameters determined previously from experimental data on a series of bypass transitions [18,19]. They quantify the streamwise establishment of the wall-normal four-layer structure of the TBL during the transition process; in effect, they quantify the transition speed. For most bypass transitions subjected to free-stream disturbances in conventional wind tunnels, the current setting is adequate. In the DNS cases, however, the transition appears to be much quicker than in the experiments, and larger scaling exponents are required for an accurate description; see the discussion in [19].
Here we apply the above model to investigate hypersonic transitional flows over straight cones at moderate AoA. As presented below, varying the three model parameters yields accurate descriptions of a substantial variety of surface heat flux distributions in the HBL transition of a straight cone. A note on the computation: the streamwise coordinate x in Eqs. (11) and (12) is approximated, for computational simplicity, by the axial coordinate of the cone (also denoted by x, with x = 0 at the cone nose tip), which has very little effect at moderate AoA.
Numerical implementation
The RANS equations closed by the SED-SL model were solved using CFL3D, a code developed in the early 1980s at the Computational Fluids Laboratory of the NASA Langley Research Center; it has supported numerous NASA programs since then and continues to be used today, particularly for the verification and validation of newly developed turbulence models. The current SED-SL code was developed from the B-L model code in CFL3D, given the apparent similarity of the two models.
Three sets of grids have been tested for the 7° half-angle cone. Grid one has a uniform mesh with 31 points in the circumferential direction (from 0° to 180°, i.e. a half-model simulation), a progressively coarsened mesh with 173 points in the axial direction, and a stretched mesh with 131 points in the wall-normal direction. The physical dimensions of the grid are: 1275 mm cone length, 64 mm radius at the front plane and 1325 mm radius at the end plane of the cone. It has been verified that the first wall-normal cell has a size of y+ < 0.7 over the whole cone surface for all the investigated cases. Grid two is doubled in the circumferential direction, and grid three is doubled in both the circumferential and axial directions, to test grid convergence.
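For illustration, a stretched wall-normal grid of the kind described can be generated as follows; a geometric stretching law is assumed here (the paper does not state which stretching function was used), and the domain height is a placeholder:

```python
import numpy as np

def stretched_wall_grid(n=131, height=0.2, ratio=1.1):
    """Geometrically stretched wall-normal coordinates: spacings grow by
    a constant factor away from the wall, then the grid is rescaled to
    the requested domain height. The first spacing should be checked
    against the y+ < 0.7 criterion reported in the text."""
    k = np.arange(n, dtype=float)
    y = (ratio**k - 1.0) / (ratio - 1.0)   # cumulative geometric spacing
    return y / y[-1] * height

y = stretched_wall_grid()
print(y[1] - y[0])  # first cell height, to be compared with the y+ target
```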
The boundary conditions are set as follows. The free-stream boundary condition is used at the inflow. The isothermal, no-slip, solid-wall boundary condition is employed on the cone surface. Since a half-model simulation is conducted, the symmetry condition is applied on the symmetry plane. Finally, the extrapolation condition is used at the outflow.
As to the numerical algorithms, an implicit approximate-factorization method is applied for time advancing. The viscous fluxes are computed with second-order central differences, and the inviscid fluxes are computed with the upwind flux-difference-splitting method. The RANS equations are solved in parallel on a workstation with 16 CPU cores.
The experimental data sets used for validation are listed in Table 1. They are selected from very recent measurements of the surface heat flux and/or surface temperature on 7° half-angle straight cones at moderate AoA, all covering a major portion of the cone surface with high measurement accuracy. The first experiment was conducted by Willems et al. [22] in the hypersonic wind tunnel (H2K) of the German Aerospace Center (GAC) in Cologne. Surface temperatures and heat fluxes were measured by quantitative infrared thermography over the whole cone surface. The second experiment was performed by Chen et al. [23] in the 1 m Hypersonic Wind Tunnel of the China Aerodynamics Research and Development Center (CARDC). Only the surface temperature was measured, for the windward and leeward cone surfaces, by quantitative infrared thermography. In Table 1 (the flow parameters of the test cases), R_N denotes the cone nose-tip radius, P0 the total pressure, T0 the total temperature, and Tw the cone surface temperature set in the current numerical simulation.
Results
Before presenting the results, the effects of the grid and of the isothermal wall condition on the simulation have been assessed. Regarding grid convergence, grid one is found adequate for AoA up to 4°, but grid two has to be used to accurately calculate the surface heat flux around the leeward center of the cone at AoA = 6°, because of the strong streamwise vortices there. The other issue is the isothermal wall condition applied in the simulation. In the GAC experiment, the cone surface temperature was measured by quantitative infrared thermography and the surface heat flux was derived from it with a heat-transfer model. One finds in [22] that for the GAC cases the surface temperature in the turbulent region is clearly higher than in the laminar region, and the temperature difference can be as large as 60 K for the GAC-6 case, about 10% of the stagnation temperature (whereas in the CARDC-6 case it is only about 1%). So the cone surface is not actually isothermal, as assumed in the numerical simulation. We have assessed the effect of the isothermal wall condition on the surface heat flux distribution by increasing the wall temperature from 300 K to 340 K in the GAC-6 case; a decrease of about 15% in heat flux is indeed found at some locations. Accepting this discrepancy, the isothermal wall condition is used for simplicity in the current study. Validation of the model is conducted by comparing the surface heat fluxes. The dimensionless heat flux Ch (or the Stanton number St) is defined as Ch = qw/[ρ∞U∞Cp(Tr − Tw)], where qw = −k(∂T/∂y)w is the wall heat flux, Tr = T∞[1 + r(γ − 1)Ma∞²/2] is the recovery temperature [24] with γ = 1.4 and r = √Pr [25] (i.e. the laminar value; changing to the turbulent value r = Pr^{1/3} changes St by less than a few percent), k is the thermal conductivity, and Cp is the specific heat at constant pressure. The focus on Ch is for the following reasons: it is the most widely measured quantity and the utmost concern in engineering applications; it indicates the location, strength and extent of the transition process; and it provides a strong constraint on the flow field: if the TrBL is not correctly simulated, the Ch distribution can hardly be correct.
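The definition above can be evaluated directly. The sketch below assumes the standard free-stream normalization qw/[ρ∞U∞Cp(Tr − Tw)] reconstructed from the symbols listed in the text; it is an illustration, not the paper's code:

```python
import math

def recovery_temperature(T_inf, Ma, gamma=1.4, Pr=0.72, laminar=True):
    """Tr = T_inf * (1 + r*(gamma-1)*Ma^2/2), with r = sqrt(Pr) (laminar)
    or r = Pr^(1/3) (turbulent); per the text, the choice changes St by
    only a few percent."""
    r = math.sqrt(Pr) if laminar else Pr ** (1.0 / 3.0)
    return T_inf * (1.0 + r * (gamma - 1.0) * Ma**2 / 2.0)

def stanton_number(q_w, rho_inf, U_inf, Cp, T_inf, T_w, Ma):
    """Ch = q_w / (rho_inf * U_inf * Cp * (Tr - Tw))."""
    Tr = recovery_temperature(T_inf, Ma)
    return q_w / (rho_inf * U_inf * Cp * (Tr - T_w))
```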
The current study serves to validate the SED-SL description of various hypersonic TrBLs over the cone through the aforementioned three model parameters (x*, ℓ0∞+, β_l), which are determined by a prediction-correction procedure: first, set the parameter values; second, solve the RANS equations with the SED-SL model; third, compare the numerical prediction with the experimental results and update the parameters until an accurate prediction is achieved. Note that such a procedure is easy to implement precisely because the current model parameters have clear physical meanings and are directly related to the shape of the Ch distribution. By repeating this procedure over different sample flows, the SED-SL model will eventually cover a wide flow regime and gain full predictive capacity for engineering applications.
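The prediction-correction procedure can be summarized as a simple loop. The sketch below is schematic: run_rans stands in for a full CFL3D/SED-SL solve, and the single correction rule (shifting x* by the onset mismatch) is a hypothetical simplification of the actual updating of all three parameters.

```python
def calibrate_transition_center(run_rans, measured_onset, params,
                                tol=0.01, max_iter=20):
    """params holds the three SED-SL parameters ('x_star', 'l0_inf',
    'beta_l'). run_rans(params) must return the predicted transition
    onset (the minimum-St location). Iterate until the prediction
    matches the measurement within the tolerance."""
    for _ in range(max_iter):
        predicted_onset = run_rans(params)          # step 2: solve RANS
        err = predicted_onset - measured_onset      # step 3: compare
        if abs(err) / measured_onset < tol:
            break
        params["x_star"] -= err                     # shift transition center
    return params
```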
Simulation of the GAC experiment
Following the above procedure, the three parameters are determined for the GAC and CARDC cases. In the current flows ℓ0∞+ is found to be invariant with AoA and the circumferential angle θ, and identical to the canonical TBL value (≈ 1.0), presumably because Ma is not very large, the cone surface is not very cold, and the pressure gradient is mild. In comparison, β_l and especially Re_x* vary with AoA and θ, as shown in Fig. 1.
Fig. 1. Variations of (a) the transition center Reynolds number Re_x* and (b) the transition overshoot strength parameter β_l with the circumferential angle θ for the GAC and CARDC test cases.
Re_x* (= x* Re∞) is a new indicator of the transition front. At AoA = 2° and 4° in the CARDC cases, Re_x* varies almost linearly with θ and flattens near θ = 0° and 180° to adapt to the symmetry condition. At AoA = 6°, one finds in Fig. 1a that Re_x* begins to break up into three regions: a windward region with a significantly postponed and rapidly changing transition front, a wide cross-flow region with a relatively slowly advancing transition front, and a narrow leeward region with a much earlier transition front. This behaviour is more apparent in the GAC-6 case than in the CARDC-6 case. The break-up is likely caused by the circumferential variation of the dominant transition mechanism at moderate AoA [11]. Indeed, the Mack second mode, which governs the whole cone surface at zero incidence, shrinks to a portion of the windward region with increasing AoA. The cross-flow mode then occupies the main surface of the cone owing to the widespread circumferential pressure gradient, leaving only a narrow leeward region for the streamwise vortex mode, where streamlines converge and form counter-rotating streamwise vortices that promote the transition.
The variation of β_l is simpler (Fig. 1b). It is invariant with AoA and θ in the cross-flow and leeward regimes, but decreases (linearly, in the present empirical setting) with decreasing θ in the windward region. On the windward side, β_l also decreases with increasing AoA, which means that the transition overshoot weakens there as the AoA is increased, perhaps owing to the stronger compressibility; more studies are needed to clarify this phenomenon. In addition, there is a clear difference in transition overshoot strength between the GAC-6 and CARDC-6 cases: the CARDC-6 case has a much stronger transition overshoot than the GAC-6 case. The reason is unclear, but it may be due to the different free-stream noise levels of the CARDC and GAC wind tunnels [26].
Figure 2 compares the SED-SL computed St distribution along the meridian line of the cone surface with the measured one for the GAC-0 case. The simulation agrees excellently with the measurement throughout the whole transition process. In particular, the transition onset location (defined as the location of minimum St) and the peak surface heat flux are computed very accurately, with relative errors of less than 2%. Figure 2 also includes a comparison between the St distribution and the measured surface temperature increment (ΔTw, with a reference Tw of 300 K) distribution, with proper normalization. The reason for this comparison is as follows. HBL transition has been diagnosed more and more frequently by quantitative infrared thermography for its high fidelity, wide angle of view and non-intrusiveness. However, the technique measures only the surface temperature, and the heat flux has to be derived from it by an algorithm with specific heat-transfer assumptions. In the CARDC experiments (and many others), only surface temperature data are provided. Thus it is interesting to study the relationship between ΔTw and St, to identify whether ΔTw can be an acceptable approximation of the surface heat flux. One finds in Fig. 2 that, for the GAC-0 case, after introducing a proportionality factor to collapse ΔTw with St in the laminar region, the rescaled ΔTw almost collapses with St over the whole TrBL, except for a rather limited difference owing mainly to the discrepancy in the transition onset location. Although this apparent similarity has to be further validated with heat-transfer models, it is understandable because the measurements are often conducted over a very short duration, so that ΔTw is strongly correlated with the local surface heat flux. Therefore, in the next section, the CARDC ΔTw data are used as substitutes for studying the heat flux distributions, which is all the more convincing because their temperature increments are much smaller than those of the GAC cases.
Figure 3 compares the SED-SL computed St contour with the measured one for the GAC-6 case. The simulation result is almost identical to the measured contour over the whole cone surface. A unique feature of the experiment is an array of distinct, standing cross-flow vortices at around θ = 135°, extending streamwise over a considerable length with a slight inclination towards the leeward side of the cone. These vortices are stationary, usually occur in a quiet or low-noise environment, and are often induced by surface roughness [10]. The cross-flow vortices disappear suddenly at θ < 135°, which is due to different wind-tunnel runs, as stated in [22].
It is thus clear that a large portion of the cone surface is governed by the cross-flow transition, as mentioned above. In a rather narrow range around θ = 180°, the transition onset is significantly earlier, along with a heat flux valley at θ = 180°, which is apparently due to the counter-rotating streamwise vortices. As shown in Fig. 3, the SED-SL simulation reproduces in great detail the elaborate distribution of St on the cone surface, but does not reproduce the standing cross-flow vortices on the lateral side of the cone.
The reason deserves further discussion. The current model is based on describing the dilation symmetry of boundary layers, which emerges as a result of the self-organization of a set of turbulent eddies. In a flow dominated by distinct vortices, which often occurs when the disturbances are slight or of a particular form, the flow evolves dynamically with usually discrete characteristic lengths, rather than statistically with continuously distributed length scales (from which the power law is established). Therefore, it is more difficult, if not impossible, for the SED-SL to describe flows whose length scales are not fully agitated, e.g. in the early stages of transition in some cases. In hypersonic transition, such flows could be the stationary cross-flow vortices, or a particular development of the Mack second mode that may lead to an additional peak heat flux before transition [27,28].
More quantitative comparisons are conducted by plotting the SED-SL computed St profiles along different meridian lines together with the corresponding measured ones extracted from the experimental contour (i.e. Fig. 3a); these are shown in Fig. 4 for the windward, cross-flow and lee sides, respectively. For all circumferential angles, the simulated profiles agree excellently with the measured ones over the whole hypersonic TrBL. The relative errors of the computed transition onset location and peak heat flux are within a few percent, which is remarkable given the current status of hypersonic transition models (note that the SED-SL in the current study is not yet a transition prediction model like the others).
Let us discuss the transition overshoot further. Transition overshoot widely occurs in by-pass transitions with strong environmental disturbances. It leads to a significant increase of the peak surface heat flux in hypersonic flows, which affects the thermal protection design. A clear understanding of the overshoot phenomenon has not yet been established, which underlies the failure of most RANS-based transition models in predicting the peak heat transfer. Qin et al. [29] used an algebraic intermittency factor to accelerate the development of the TBL in the late transition region, so that the overshoot phenomenon is reproduced with a reasonable degree of accuracy. However, their method is based on the perspective of transition model construction, and its applicability to complex configurations requires further investigation [13]. In the present approach, the transition overshoot is characterized by a single parameter, β_l, which, if larger than unity, introduces a streamwise transition layer between the laminar and fully turbulent flows. In this layer, eddies stimulated upstream by intense disturbances become highly developed, leading to a violent organization that requires a considerable length (quantified by β_l) to relax eventually to a conventional TBL. This streamwise transition layer mimics the buffer layer (in which the eddies are strong) in the wall-normal direction of the TBL, which lies between the viscous sublayer and the fully developed log-layer and bulk flow [30]. From this point of view, the transition overshoot is a normal phenomenon, and the robustness of β_l in depicting various by-pass transitions is understandable.
Fig. 4. Comparisons between the St distributions along the meridian lines measured in experiment [22] and computed by the SED-SL model for (a) the windward surface, (b) the lateral surface, and (c) the leeward surface of the GAC-6 case. The data are vertically translated for clear display.
Now we report the streamwise development of the hypersonic TrBL on the cone surface as revealed by the three physical parameters of the SED-SL model. Figure 5 shows the variations of ℓ0+ and y_buf+ along different meridian lines for the GAC-6 case. y_buf+ denotes the thickness of the buffer layer, i.e. the location of the near-wall coherent structures, which has a crucial impact on the shape of the mean velocity profile but is difficult to measure in experiments on high-speed boundary layers. In the current setting of the SED-SL model, y_buf+ has a streamwise two-layer development that depends only on x* (see Eq. (12) and Fig. 5b), which can be seen as a first-order approximation. With DNS and possible experimental profile data, the two-layer model of y_buf+ can be assessed and refined in the future.
On the other hand, ℓ0+ is more directly relevant to the skin friction and surface heat flux distributions on the cone surface, because it determines the magnitude of the stress length and thus the levels of the eddy viscosity and eddy conductivity. As shown in Fig. 5a, ℓ0+ exhibits a three-layer streamwise evolution, especially on the cross-flow and leeward sides of the cone (on the lee side θ = 174° is selected to avoid the heat flux valley), where an overshoot of about 25% can be found for ℓ0+ compared with the TBL value. At the windward center, the transition overshoot is not apparent, as mentioned above. From Fig. 5 one finds that, although the transition mechanisms are very different on the windward, cross-flow and leeward sides of the cone, the boundary layer development follows a similar path, with only quantitative differences in ℓ0+.
Figures 6 and 7 show the streamwise evolution of the Reynolds shear stress and streamwise mean velocity profiles during the transition process of case GAC-0. At the transition onset the mean velocity has a laminar profile (Fig. 7) and the Reynolds shear stress is very small (Fig. 6). After that the boundary layer evolves rapidly from a wall-normal Blasius structure to a four-layer TBL structure, as displayed in Fig. 7. A detailed observation of the variation of the mean velocity profile indicates that the near-wall viscous sublayer and buffer layer form before the outer layer, which is reasonable. This streamwise development is currently described by a growth of ℓ0+ and y_buf+ only, as shown in Fig. 5. Although the true establishment of the TrBL may be more complicated, the current proposal proves to be a good approximation, as the comparisons show. At the location of peak heat flux, the transition has almost been completed: the van Driest-transformed mean velocity profile is close to a standard four-layer TBL shape, and collapses in the log-layer onto the logarithmic law of the incompressible TBL (Fig. 7). Moreover, the Reynolds shear stress profiles become invariant in the inner region in wall units, and in the outer region in the outer coordinate, both agreeing with the DNS of Pirozzoli et al. for a flat-plate TBL at Ma = 2.25 [31] (Fig. 6), showing the similarity of the flat-plate TBL and the cone TBL (at zero incidence). Therefore, the Re- and Ma-invariance properties of the Reynolds stress in the TBL regime are correctly captured by the SED-SL model.
Simulation of the CARDC experiment
The measurement conducted by Chen et al. at CARDC [23] is a very recent experimental study of hypersonic transitional flow over a sharp cone. The data are of high accuracy, high resolution and high repeatability. However, only Tw data are provided, and the measurement is restricted to the windward and leeward sides of the cone, making the cross-flow regime inaccurate owing to the large angle of view. Here we present the computation of the CARDC cases with the SED-SL model.
The settings of the model parameters for the CARDC cases are shown in Fig. 1. Because ΔTw is closely correlated with the surface heat flux, as revealed by Fig. 2, we directly compare the simulated St with the rescaled ΔTw, in which the proportionality factor is determined by collapsing the laminar data. Figure 8 compares the ΔTw contours with the SED-SL computed St contours for all four CARDC cases. For the cones at nonzero AoA, the windward and leeward surfaces are compared separately. The agreement between simulation and measurement is excellent over the whole cone surface and for all test cases. Thus, the current model demonstrates high accuracy and wide adaptivity in describing various hypersonic TrBLs.
Further comparisons of profiles are performed in Figs. 9, 10, 11 and 12 for each CARDC case, confirming the agreement of the contours. In Fig. 9, specifically, the St profile along the cone axial direction computed by the SED-SL model is compared with the rescaled ΔTw profile measured in the experiment for the CARDC-0 case. Also plotted in Fig. 9 is the GAC-0 case, to evaluate the similarity and discrepancy between the two experiments. One finds that the SED-SL model accurately describes the surface heat transfer properties of both TrBLs. A major difference between the two flows is the transition onset location, which is earlier in the CARDC-0 case. Besides the small discrepancy in Ma, two reasons might contribute to this difference. First, the nose-tip radius of the GAC cone is much larger than that of the CARDC cone, leading to a postponed transition onset for the GAC-0 case according to [5]. Second, the free-stream noise level characterized by Tu = p_rms/p̄ is 3.9% for the CARDC-0 case according to [23], while that of the GAC-0 case is about 2%, which also results in an earlier transition for the CARDC-0 flow [34]. Apart from the transition onsets, the two transitional flows are almost identical in the laminar and fully turbulent regimes, which is captured by the SED-SL model.
Discussion and conclusion
In this work the HBL transition on a slender cone at moderate incidence has been studied via a symmetry-based length model, the SED-SL model. It is shown that, by properly setting the multi-regime structure of the TrBL through only three parameters (a transition center, an after-transition near-wall eddy length, and a transition width quantifying the transition overshoot), the SED-SL correctly reproduces the heat flux distribution over the whole cone surface (from the windward to the leeward side), agreeing closely with multiple sets of wind tunnel data. The results demonstrate the validity of the SED theory regarding the multi-regime similarity across a non-equilibrium transitional process. This success indicates that a universal feature of the TrBL is captured, namely an organizational principle of the TBL (due to wall dilation symmetry), which then yields a simple algebraic model for transitional HBLs.
The current SED-SL model has three distinct features compared with popular RANS models: simplicity, accuracy and transparency of parameter interpretation. The goal of this paper is not yet to deliver a mature transition model for industrial application, but to validate the symmetry-based description of transitional HBLs. The current success indicates that, with the accumulation of validated flow cases, a practical transition model will emerge for computing a considerably large number of flows, with both high prediction accuracy and wide adaptivity.
Note that the "similarity structure" proposed by the SED theory is statistical, and is distinct from those highly unstable patterns such as instantaneous vortex structures or wave-like perturbations (which have been the targets in the stability theory and coherent structure studies). But we assert that only the similarity structure is stable and can be directly related to mean flow properties (such as skin friction), so as to be relevant to engineers' prediction ability. So, the present results further support the assertion that the most important task in the basic study of engineering flows is to characterize and quantify this similarity structure. For future perspective, it would be intriguing to investigate the dependence of the transition-relevant multi-regime structure parameters on various flow parameters. For example, our preliminary study shows that ∞ 0 decreases with increasing Ma, which is sound as compressibility yields a "compression" of eddy size globally, hence a smaller eddy-viscosity. Another crucial transition parameter is the transition center x * , which defines the transition location that is of utmost importance in aerospace engineering. New correlation relationships between Re x * and multiple transition influential factors have to be established for the current SED-SL model to reach full predictability, which we will demonstrate to be feasible in a future communication, since we have captured the right similarity structure. | 8,814.4 | 2022-07-08T00:00:00.000 | [
"Physics"
] |
STUDY ON THE RELATIONSHIP BETWEEN STRUCTURE AND MOISTURIZING PERFORMANCE OF SEAMLESS KNITTED FABRICS OF PROTEIN FIBERS FOR AUTUMN AND WINTER
Abstract: With dry weather and low humidity in autumn and winter, human skin is prone to dryness, which can even lead to itching and other problems, so skin moisturizing is important. Fabrics produced from protein fiber have a certain moisturizing effect on human skin. Five types of yarn, namely collagen fiber, cheese protein fiber, silkworm pupa protein fiber, cashmere protein fiber, and viscose, are chosen as veil materials, while nylon/spandex composite fiber is chosen as the inner yarn material. Three fabric structures, the weft plain stitch, 1 + 1 mock rib, and 1 + 3 mock rib, are selected for the study. Based on the full factorial experimental design method, 15 samples of seamless knitted fabrics are produced on a circular knitting machine. To investigate the effects of different veil materials and fabric structures of seamless knitted fabric on skin moisturizing performance, the skin moisture content test and the trans-epidermal water loss test were carried out before and after the fabric samples were wrapped around the skin of 20 participants. The results show that both the veil material and the fabric structure have significant effects on the skin moisture content. Using collagen yarn as the veil material and 1 + 1 mock rib as the fabric structure results in better moisturizing effects on human skin. In the trans-epidermal water loss test, the fabric structure has significant effects on the results, while the veil material does not. However, the trans-epidermal water loss value of the fabric with protein yarn is smaller than that of the fabric with ordinary viscose. Therefore, using cheese protein yarn as the veil material and 1 + 1 mock rib as the fabric structure results in a smaller trans-epidermal water loss value.
Introduction
Skincare textiles contain substances that are released over time onto the human skin [1] and have certain skincare or cosmetic effects [2]. Generally, textiles used for skin care and moisturizing must be natural, healthy, and non-irritating to the skin. Therefore, the raw materials of skincare textiles should be sourced from pure natural raw materials or extracts of natural raw materials as far as possible, to ensure that they are mild to the skin and safe for the human body and the natural environment [3].
Protein fiber has a shorter production cycle than chemical fiber. It is recyclable and environmentally friendly, and can also reduce energy consumption. Because it is prepared through blending modification technology, protein fiber has some excellent performance characteristics that other natural fibers do not have. Therefore, it is widely used in the field of textiles and clothing.
Nowadays, a series of moisturizing fibers and textiles with cosmetic and skincare functions have been developed [4,5], contributing richness and innovation to textile industry products. Some researchers found that vitamin E textiles have moisturizing effects on the skin: when proteases were destroyed, the skin tissue in contact with the fabric produced a large amount of active ingredients, which allowed vitamin E to enter the skin and exert its effects [6].
Therefore, it is of great application value to design and develop protein seamless knitted products with a good moisturizing effect by combining seamless knitting technology with protein fiber. Based on the moisturizing effect of protein fiber on human skin, this study analyzes the influence of the fabric structure and raw materials of protein fiber seamless knitted fabrics on the moisturizing performance of human skin, and proposes a reference theoretical basis and a test method for the moisturizing performance of protein fiber seamless knitted products.
Selection of yarn scheme
This study mainly researches the effect of protein fiber seamless knitted fabric on the moisturizing performance of human skin. 11.81 tex (50s) collagen yarn, 11.81 tex (50s) cheese protein yarn, 11.81 tex (50s) silkworm pupa protein yarn, 11.81 tex (50s) cashmere protein yarn, and 11.81 tex (50s) viscose yarn are chosen as veil materials, while 2.22 tex/3.33 tex (20D/30D) nylon/spandex composite yarn is chosen as the inner yarn material, with the front coil as the face yarn and the back coil as the inner yarn. The specific face yarn specifications are shown in Table 1.
The cheese protein yarn, silkworm pupa protein yarn, and cashmere protein yarn were purchased from Sichuan Yibin Huimei Fiber New Material Co., Ltd; the collagen yarn was purchased from Qingdao Bonte Fiber Co., Ltd; and the viscose yarn and nylon/spandex composite yarn were purchased from Yiwu Huading Nylon Co., Ltd. The four protein fiber raw materials are obtained by adding extracted protein solutions to the spinning solution before spinning. Owing to the limited yarn available on the market, the four types of protein yarn selected in this study are blends of protein fibers and other types of fibers. The blending ratios of the collagen yarn and silkworm pupa protein yarn were 70% viscose + 30% collagen fiber and 70% viscose + 30% silkworm pupa protein fiber, respectively. The blending ratio of the cheese protein yarn is 40% Lenzing Modal + 30% acrylic + 30% cheese protein fiber, in which Lenzing Modal fiber is a regenerated cellulose fiber of the high-wet-modulus viscose type. The blending ratio of the cashmere protein yarn is 40% Jasel + 30% acrylic + 30% cashmere protein fiber, in which Jasel fiber is a viscose staple fiber. The proportion of protein fiber in all four protein yarns is 30%.
Fabric structure design
Fabric structure design has a certain impact on the appearance and performance of seamless knitted fabrics. This study mainly focuses on seamless knitted products for autumn and winter. Therefore, based on the actual production characteristics and the wearing needs of winter knitted fabrics, three structures commonly used in autumn and winter seamless knitted products were selected, namely the weft plain stitch, 1 + 1 mock rib, and 1 + 3 mock rib [7].
Establishment of sample scheme
In this study, the veil raw material and fabric structure are taken as the experimental factors. There are five levels of raw material, namely collagen yarn, cheese protein yarn, silkworm pupa protein yarn, cashmere protein yarn and viscose yarn, and three levels of fabric structure, namely weft plain stitch, 1 + 1 mock rib and 1 + 3 mock rib. The specific factor levels are shown in Table 2. The full-factorial experimental design method is used to design the sample scheme, so as to find the performance differences among protein fiber seamless knitted fabrics with different veil raw materials and fabric structures. The specific sample scheme is shown in Table 3.
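The full-factorial scheme of Tables 2 and 3 (5 veil materials x 3 structures = 15 samples) can be enumerated as in the following illustrative snippet (the labels are our own shorthand, not the tables' exact wording):

```python
from itertools import product

veils = ["collagen", "cheese protein", "silkworm pupa protein",
         "cashmere protein", "viscose"]
structures = ["weft plain stitch", "1+1 mock rib", "1+3 mock rib"]

# One sample per factor-level combination, numbered 1..15 as in Table 3.
samples = [{"no": i + 1, "veil": v, "structure": s}
           for i, (v, s) in enumerate(product(veils, structures))]
assert len(samples) == 15
```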
The 15 fabric samples in this study were woven on a Santoni SM8-TOP2 MP seamless circular knitting machine, as shown in Figure 1. This machine has eight feed systems, each with eight nozzles. During weaving, each feed threads one veil yarn and one inner yarn, with a 1:1 ratio between the veil and the inner yarn. All seamless knitted samples were woven with the same weaving parameters: gauge, 28 needles/inch; cylinder diameter, 14 inches; needle count, 1,248; and the same height position of the stitch cam. Owing to factors such as the veil raw material and the fabric structure, the density, weight per square meter, and thickness of each sample may vary after removal from the machine.
The horizontal and longitudinal density test results of the 15 seamless knitted fabrics are shown in Table 4. Horizontal density refers to the number of loops within 5 cm along the horizontal direction of the knitted fabric, and longitudinal density refers to the number of loops within 5 cm along the longitudinal direction. According to the experimental standards and steps, the 15 seamless knitted fabrics were tested and the results calculated; the weight per square meter and thickness results are shown in Table 5.
Skin moisture content test
The skin moisture content in this study refers to the water content of the skin stratum corneum, and a capacitance test is used to measure it. The principle of the capacitance test is based on the significant difference between the dielectric constant of water and that of other substances: the dielectric constant of water is 81, while that of the other tissue components in the skin is less than 7 [8]. Acting as a capacitor, the Corneometer CM 825 skin moisture test probe measures different dielectric constants when it comes into contact with different substances; thus, the change in water content of the stratum corneum can be measured and calculated. The advantages of the capacitance test are that there is no unnatural contact between the tested skin and the probe, the skin is not harmed, the test process is not affected by external electric fields, and the test results are accurate. In addition, the water content of the stratum corneum can be measured quickly, because the probe and the moisture in the skin reach equilibrium very rapidly.
Experimental instruments: MPA6 multi-probe test system and Corneometer CM 825 skin moisture test probe, as shown in Figure 2.
Reference standard: QB/T 4256-2011 "Guidelines for Evaluation of Moisturizing Effect of Cosmetics." Test sample: A reasonable sample size is 25 cm in length and 20 cm in width. Velcro tapes 20 cm long and 4 cm wide are attached on both sides of the sample for convenient measurement; this ensures that the sample completely wraps around the forearm and can be adjusted to the forearm circumference of different participants. The specification of the fabric sample is shown in Figure 3(a). The test area is 5 cm from the inner forearm towards the palm, with an area of 3 cm × 3 cm. The marking of the test area is shown in Figure 3(b), and the wrapped state of the sample is shown in Figure 3(c).
Conditions for participants: Participants are not highly sensitive and have no immune system defects or autoimmune diseases. The test site must not have received skin treatments that could make the test results inaccurate. Participants have not used hormone drugs or immunosuppressants within the past month.
Test environment: A room with an ambient temperature of 20-22°C and a humidity of 40-60%, in line with the test standards.
Pre-experiment: After the fabric has been in contact with the skin for a period of time, the moisturizing effect of the fabric no longer changes the skin moisture content, so a pre-experiment is necessary to determine this period. The instrument was calibrated according to the instructions, and the test was performed after confirming that the instrument was in normal condition. The time gradient set in this experiment was to test the skin moisture content 30 min, 1 h, and 2 h after the fabric sample was applied. The results show that the test results 1 h after wrapping around the forearm are basically consistent with those at 2 h, indicating that the skin moisture content reaches a balanced state after 1 h of wrapping. Therefore, a sample covering time of 1 h is reasonable.
Formal test steps: After the participants sit still for 30 min, the initial value T0 is measured, as shown in Figure 4. During the experiment, five different locations in the test area are tested and the average value is taken. After the initial value T0 is measured, the fabric sample is wrapped around the forearm, and the above test steps are repeated to obtain T1.
Trans-epidermal water loss test
The higher the value of trans-epidermal water loss, the more water is lost through the skin per unit time, and the more seriously the barrier function of the skin's stratum corneum is damaged. Conversely, a lower value means that the stratum corneum is gradually recovering [9]. Therefore, trans-epidermal water loss is an indispensable parameter.
The Tewameter TM Hex skin moisture loss test probe forms a relatively stable testing environment on the skin surface by means of a specially designed cylindrical cavity with open ends; it measures the trace water evaporation on the skin surface through temperature and humidity sensors, from which the amount of water lost through the epidermis is obtained.
Experimental instruments: MPA6 multi-probe test system and the Tewameter TM Hex skin moisture loss test probe, as shown in Figure 5.
Reference standard: QB/T 4256-2011 "Guidelines for Evaluation of Moisturizing Effect of Cosmetics." Test sample: Same as the skin moisture content test.
Conditions for participants: Same as the skin moisture content test.
Test environment: Same as the skin moisture content test.
Test steps: After completing the skin moisture content test, in order to avoid the loss of water from the skin when the forearm is exposed to the air after the sample is removed, the trans-epidermal water loss test is conducted immediately. During the test, the probe is placed naturally against the participant's forearm at 90°, without excessive pressure, as shown in Figure 6.
Test results and analysis of skin moisture content
The skin moisture content change rate for the 15 samples was calculated according to formula (1), and the test results were obtained by taking the average value.
Change rate of skin moisture content (%):
R = (T1 − T0)/T0 × 100%    (1)
where R is the change rate of skin moisture content in the test area covered by the sample, %; T1 is the skin moisture content of the test area after 1 h of sample covering, %; T0 is the skin moisture content of the test area before sample covering, %. The results of the average skin moisture content change rate of the participants are shown in Table 6.
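For clarity, formula (1) (and likewise formula (2) below for trans-epidermal water loss) amounts to the following computation; the example readings are invented purely for illustration:

```python
def change_rate(T0, T1):
    """R = (T1 - T0) / T0 * 100: relative change after 1 h of covering."""
    return (T1 - T0) / T0 * 100.0

# Illustrative (T0, T1) pairs, one per participant; real values come from
# the five-location averages described in the test steps.
readings = [(45.2, 52.8), (48.1, 55.0), (43.7, 50.1)]
rates = [change_rate(t0, t1) for t0, t1 in readings]
mean_rate = sum(rates) / len(rates)
print(round(mean_rate, 2))
```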
The data, checked in SPSS, were consistent with a normal distribution, and the relationship between the veil raw material and fabric structure and their influence on the skin moisture content change rate was further explored using two-factor ANOVA. The variance analysis results for the skin moisture content change rate of the fabrics are shown in Table 7. It can be seen from the table that both factors have statistically significant effects on the skin moisture content change rate, with the veil raw material having the greater impact.
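A two-factor ANOVA equivalent to the SPSS analysis can be run in Python with statsmodels, as sketched below; the CSV file name and column names are assumptions made for this example:

```python
import pandas as pd
import statsmodels.api as sm
from statsmodels.formula.api import ols

# Hypothetical long-format table: one row per measurement with its factor
# levels and the change rate (as in Table 6).
df = pd.read_csv("moisture_change_rates.csv")  # columns: veil, structure, rate

model = ols("rate ~ C(veil) + C(structure)", data=df).fit()
print(sm.stats.anova_lm(model, typ=2))  # p < 0.05 marks a significant factor
```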
The Duncan method was used to further compare the differences between levels within the veil raw material and fabric structure factors. The veil raw material and fabric structure were set as fixed factors, and the skin moisture content change rate was set as the dependent variable. The level differences were analyzed in detail through the subsets output by the Duncan method. The results are shown in Tables 8 and 9 (an analogous post-hoc grouping is sketched in code after the two observations below), from which the following can be seen: (1) In terms of veil raw materials, collagen fiber and silkworm pupa protein fiber occupy the same subset, indicating no significant difference between these two veils; silkworm pupa protein fiber, cashmere protein fiber, and cheese protein fiber occupy one subset, indicating no significant difference among them; viscose occupies a subset alone, indicating that the viscose fabric differs significantly from the other four protein yarn fabrics in its influence on the skin moisture content change rate. The skin moisturizing effect of the four protein yarn fabrics is clearly better than that of the viscose fabric. Analyzing the blending ratios of the veil raw materials, the fiber blending ratios of the collagen yarn and silkworm pupa protein yarn are 70% viscose and 30% protein fiber. The skin moisturizing effect of these two fabrics is better than that of the ordinary viscose fabric, indicating that the protein fiber plays a certain role in skin moisturizing. The blending ratios of the cashmere protein yarn and cheese protein yarn both contain 30% acrylic fiber, which has poor moisture absorption; however, the skin moisturizing effect of these two fabrics is also better than that of the ordinary viscose fabric, indicating that the addition of protein fiber improves the skin moisturizing performance of the fabric.
(2) In terms of fabric structure, the 1 + 3 mock rib and weft plain stitch occupy the same subset, indicating that the difference between them is not significant; the 1 + 1 mock rib occupies a subset alone, indicating that it differs clearly from the other two fabric structures in moisturizing performance. Although the structural characteristics of the 1 + 1 mock rib and 1 + 3 mock rib make the fabric thicker, with obvious ridges on the surface, the 1 + 1 mock rib fabric is thinner than the 1 + 3 mock rib, so the protein fiber in the fabric has a larger contact area with the skin; therefore, the moisturizing effect of the 1 + 1 mock rib fabric is better. Compared with the weft plain fabric, the 1 + 1 mock rib fabric is thicker and warmer, which prevents the loss of moisture from the surface of the skin and thus also provides a moisturizing effect. In summary, the 1 + 1 mock rib has the highest skin moisture content change rate and the best moisturizing effect.
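Duncan's multiple range test is not available in the common Python statistics libraries, so the sketch below substitutes Tukey's HSD from statsmodels as an analogous post-hoc grouping of factor levels. This is a deliberate stand-in, not the paper's exact method, and the file and column names are again assumptions:

```python
import pandas as pd
from statsmodels.stats.multicomp import pairwise_tukeyhsd

df = pd.read_csv("moisture_change_rates.csv")  # columns: veil, structure, rate
res = pairwise_tukeyhsd(endog=df["rate"], groups=df["veil"], alpha=0.05)
print(res.summary())  # non-rejected pairs behave like a shared Duncan subset
```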
The estimated marginal mean is an estimate calculated from the sample to compare the means of each level while controlling for the influence of the other factors under the existing model [10]. The contour diagram of the estimated marginal means of the skin moisture content change rate is shown in Figure 7. According to the variance analysis above, both the veil raw material and the fabric structure have a significant influence on the skin moisture content change rate. The figure shows that when the independent variable is the veil raw material, there is a clear difference in the estimated marginal values of the different fabric structures: the 1 + 1 mock rib fabric has the highest value and the best skin moisturizing effect, followed by the weft plain stitch and 1 + 3 mock rib fabrics, between which the difference is not obvious and whose effect is poorer. When the independent variable is the fabric structure, there is also a clear difference in the estimated marginal values of the different veil raw materials: the collagen yarn fabric has the highest value and the best skin moisturizing effect, followed by the silkworm pupa protein yarn, cashmere protein yarn, and cheese protein yarn fabrics, while the viscose yarn fabric has the worst skin moisturizing effect.
Test results and analysis of trans-epidermal water loss
The change rate of trans-epidermal water loss corresponding to 15 samples was calculated according to formula (2), and the test results of trans-epidermal water loss change rate were obtained by taking the average value.
Change rate of trans-epidermal water loss (%):
R = (T1 − T0)/T0 × 100%    (2)
where R is the change rate of trans-epidermal water loss in the test area covered by the sample, %; T1 is the trans-epidermal water loss of the test area after 1 h of sample covering, %; T0 is the trans-epidermal water loss of the test area before sample covering, %.
The results of the average trans-epidermal water loss change rate of the participants are shown in Table 10. The data, checked in SPSS, were consistent with a normal distribution, and the relationship between the veil raw material and fabric structure and their influence on the trans-epidermal water loss change rate was further explored using two-factor ANOVA. The variance analysis results are shown in Table 11. It can be seen from the table that the veil raw material has no significant effect on the trans-epidermal water loss change rate, while the fabric structure has a significant effect.
In order to compare the differences among levels of fabric structure, the fabric structure was set as a fixed factor and the change rate of trans-epidermal water loss as the dependent variable. The Duncan method was used to further analyze the differences among subsets, and the results are shown in Table 12. It can be seen from the table that the weft plain stitch occupies a subset alone, while the 1 + 3 mock rib and 1 + 1 mock rib occupy the same subset, indicating that the 1 + 1 mock rib has a better effect on reducing water loss through the epidermis.
The contour diagram of the estimated marginal means of the change rate of trans-epidermal water loss is shown in Figure 8. According to the variance analysis above, the fabric structure has a significant influence on the change rate of trans-epidermal water loss. The figure shows that when the independent variable is the veil raw material, there is a clear difference in the estimated marginal values of the different fabric structures: the 1 + 1 mock rib fabric has the highest value and the best effect on the change rate of trans-epidermal water loss, followed by the 1 + 3 mock rib fabric; the weft plain stitch fabric has the lowest estimated marginal value and the poorest moisturizing effect. When the independent variable is the fabric structure, the estimated marginal mean of the change rate of trans-epidermal water loss of the cheese protein yarn fabric is the highest, indicating that its effect in reducing epidermal water loss is the most obvious after the sample is applied, i.e. this fabric has a relatively good moisturizing effect. The silkworm pupa protein yarn fabric is second, and the cashmere protein yarn and collagen yarn fabrics are close to each other, indicating that their effect on the trans-epidermal water loss after covering the arm is not significant. The change of the viscose yarn fabric is the least obvious, and its moisture retention is relatively the worst.
Conclusions
In this study, 15 knitted samples were tested for skin moisture content and trans-epidermal water loss on 20 participants, and the change rates of skin moisture content and trans-epidermal water loss were calculated. Based on the test results, the analysis is as follows. In the skin moisture content change rate test: (1) In terms of veil raw materials, the order of influence of the fabric samples on the moisturizing performance of the skin is collagen yarn fabric > cheese protein yarn fabric > silkworm pupa protein yarn fabric > cashmere protein yarn fabric > viscose fabric.
(2) In terms of fabric structure, the order of influence of fabric samples on the moisturizing performance of skin is 1 + 1 mock rib fabric > weft plain stitch fabric > 1 + 3 mock rib fabric.
(3) Overall, when the fabric structure is 1 + 1 mock rib and the veil raw material is collagen yarn, the skin moisture content change rate is the highest, giving the best moisturizing effect on human skin.
In the trans-epidermal water loss change rate test, the veil material had no significant effect on trans-epidermal water loss, possibly because the protein yarn fabric could not significantly change the trans-epidermal water loss in a short period of time, and the skin barrier did not change significantly due to the application of the protein yarn fabric sample. It may also be because the protein fiber content of the fabric is not high enough, so that the moisture in the yarn cannot change this index. The change rate of trans-epidermal water loss was significantly affected by the fabric structure, and the 1 + 1 mock rib fabric had the highest change rate, indicating that it played a good role in reducing skin moisture loss. However, the change rate of trans-epidermal water loss after covering with a protein yarn sample is smaller than that after covering with the viscose fabric sample, indicating that the moisture retention ability of protein yarn fabric is better than that of ordinary viscose fabric.
Figure 4. Test status diagram of testing skin moisture content.
Figure 5. MPA6 multi-probe test system and the Tewameter TM Hex skin moisture loss test probe.
Figure 6. Test status diagram of testing trans-epidermal water loss.
Figure 7. Estimated marginal average of skin moisture content change rate: (a) independent variable is veil materials, (b) independent variable is fabric structure.
Figure 8. Estimated marginal average of the change rate of trans-epidermal water loss: (a) independent variable is veil materials and (b) independent variable is fabric structure.
Table 1. Face yarn raw materials and specifications
Table 2. Factor level sheet
Table 4. Horizontal and longitudinal density
Table 5. Square meter weight and thickness
Figure 2. MPA6 multi-probe test system and Corneometer CM 825 skin moisture test probe.
Table 6. Skin moisture content change rate
Table 7. Variance analysis of skin moisture content change rate
Table 8. Multiple comparisons of different levels of veil types
Table 9. Multiple comparisons of different levels of fabric structure
Table 10. Change rate of trans-epidermal water loss
Table 11. Variance analysis of change rate of trans-epidermal water loss
Table 12. Multiple comparisons of different levels of fabric structure | 5,983.6 | 2024-01-01T00:00:00.000 | [
"Materials Science"
] |
Development and validation of a pyroptosis-related genes signature for risk stratification in gliomas
Background: Glioma is a highly heterogeneous disease, which makes prognostic prediction a challenge. Pyroptosis, a form of programmed cell death mediated by gasdermin (GSDM), is characterized by cell swelling and the release of inflammatory factors. Pyroptosis occurs in several types of tumor cells, including gliomas. However, the value of pyroptosis-related genes (PRGs) in the prognosis of glioma remains to be further clarified. Methods: In this study, mRNA expression profiles and clinical data of glioma patients were acquired from the TCGA and CGGA databases, and one hundred and eighteen PRGs were obtained from the Molecular Signatures Database and GeneCards. Consensus clustering analysis was then performed to cluster glioma patients. The least absolute shrinkage and selection operator (LASSO) Cox regression model was used to establish a polygenic signature. Functional verification of the pyroptosis-related gene GSDMD was achieved by gene knockdown and western blotting. Moreover, the immune infiltration status of the two risk groups was analyzed with the "gsva" R package. Results: Our results demonstrated that the majority of PRGs (82.2%) were differentially expressed between lower-grade gliomas (LGG) and glioblastoma (GBM) in the TCGA cohort. In univariate Cox regression analysis, eighty-three PRGs were associated with overall survival (OS). A five-gene signature was constructed to divide patients into two risk groups. Compared with patients in the low-risk group, patients in the high-risk group had significantly shorter OS (p < 0.001). We also found that the high-risk group showed higher infiltration scores for immune cells and immune-related functions. The risk score was an independent predictor of OS (HR > 1, p < 0.001). Furthermore, knockdown of GSDMD decreased the expression of IL-1β and cleaved caspase-1. Conclusion: Our study constructed a new PRGs signature that can be used to predict the prognosis of glioma patients. Targeting pyroptosis might serve as a potential therapeutic strategy for glioma.
Introduction
Glioma, the most common primary central nervous system (CNS) malignancy, is characterized by extreme heterogeneity, short survival and a high recurrence rate (Louis et al., 2016; Zhang et al., 2021). GBM, the predominant pathological type of glioma, is highly malignant and aggressive, with a median patient survival of only 12-14 months and a 5-year survival rate of less than 10% (Jiang et al., 2016). The main reasons for the poor prognosis of patients include strong tumor cell proliferation and invasion ability, temozolomide chemotherapy resistance, and immunosuppression of the tumor microenvironment (Heimberger et al., 2008; Perazzoli et al., 2015; Cai et al., 2018; Luoto et al., 2018; Chen et al., 2019). According to statistics, in 2016 there were 330,000 cases of CNS tumors and 227,000 deaths worldwide (GBD 2016 Brain and Other CNS Cancer Collaborators, 2019). In recent years, research on the progression and treatment of gliomas has continued to develop (Qin et al., 2021; Sun et al., 2021; Zhao et al., 2022). Currently, the clinical treatment strategy for glioma patients is mainly surgical resection, supplemented by concurrent radiotherapy and chemotherapy (Stupp et al., 2008; Jiang et al., 2016; Wang et al., 2020a). Additionally, new treatment methods, including molecular targeted therapy and immunotherapy, have gradually emerged (Li et al., 2020a; Meng et al., 2020; Wu et al., 2020). Molecular markers also play an important role in the diagnosis and treatment of gliomas; they can not only serve as targets for drug therapy but also guide surgical treatment (Li et al., 2020b). However, overcoming the susceptibility to relapse and the poor prognosis of glioma patients remains a challenge. The high heterogeneity of gliomas and the limitations of current diagnostic modalities also complicate the prognostic evaluation of patients. Consequently, it is necessary to further explore effective prognostic evaluation methods and promising therapeutic targets.
Pyroptosis is a kind of programmed cell death induced by caspases (Vande and Lamkanfi, 2016), which causes cell swelling, cell membrane rupture and the release of proinflammatory substances (Fink and Cookson, 2007). Unlike apoptosis, pyroptosis requires the involvement of the GSDM family as executioners to mediate cell swelling (Feng et al., 2018). With the discovery of the GSDM family, the scope of research on pyroptosis has continued to expand. Pyroptosis can be activated through two main pathways: GSDMD-dependent pyroptosis regulated by caspase-1/4/5/11 and GSDME-dependent pyroptosis regulated by caspase-3 (Shi et al., 2015; Liu and Lieberman, 2017; Wang et al., 2017; Xu et al., 2018). Pyroptosis is closely related to a variety of human diseases, especially malignancies, and the relationship between pyroptosis and tumors varies with tissue and genetic background (Yu et al., 2021). For example, in esophageal squamous cell carcinoma, metformin can activate the pyroptosis process through the miR-497/proline, glutamate and leucine rich protein-1 pathway. In addition, lncRNA RP1-85F18.6 is highly expressed in colorectal cancer tissues and can inhibit colorectal cancer cell pyroptosis (Ma et al., 2018). In gliomas, knockdown of hsa_circ_0001836 significantly increased the expression of NLRP1, cleaved caspase-1 and GSDMD-N, and induced the pyroptosis of glioma cells. Besides, miRNA-214 can inhibit the proliferation and migration of glioma cells by targeting caspase-1, which is involved in pyroptosis (Jiang et al., 2017). These studies show that pyroptosis appears in a wide variety of tumors, including gliomas. However, the distinct roles of pyroptosis and PRGs in glioma remain poorly studied, and whether they are related to the prognosis of patients with glioma needs further verification.
In this study, the mRNA expression profiles and detailed clinical data of glioma patients were obtained from public databases (TCGA and CGGA). Subsequently, we performed differentially expressed gene analysis and univariate Cox regression analysis to identify fifty-eight PRGs. Finally, a five-gene signature was constructed through LASSO regression analysis. We further validated the function of GSDMD, one of the five marker genes, in pyroptosis using gene knockdown and western blotting. Additionally, single-sample gene set enrichment analysis (ssGSEA) was used to explore the immune infiltration status of the different risk groups. Our results indicate that PRGs play a crucial biological role in glioma and may therefore be promising prognostic biomarkers and therapeutic targets for glioma.
Materials and methods
Data collection
RNA sequencing and clinical data were obtained from the TCGA combined LGG/GBM dataset (n = 467 for LGG and 168 for GBM) retrieved from the UCSC Xena Browser, which was used as the training cohort. Meanwhile, RNA-seq transcriptome data and clinical characteristics of the mRNAseq_325 and mRNAseq_693 datasets (n = 630 for LGG and 388 for GBM) were obtained from CGGA (http://www.cgga.org.cn), which were used as the validation cohort.
Consensus clustering based on PRGs
According to PRG expression, glioma patients were clustered with the ConsensusClusterPlus R package, with the number of clusters ranging from 2 to 9. We used the cumulative distribution function (CDF), the delta area and the consensus matrix to determine the optimal number of subtypes. Then, the Kaplan-Meier method was applied to compare OS between glioma subtypes.
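To make the clustering procedure concrete, the following minimal Python sketch re-implements the core idea of consensus clustering (the study itself used the ConsensusClusterPlus R package); the expression matrix `expr`, the resampling fraction, and k-means as the base clusterer are illustrative assumptions, not the authors' code.

```python
# Minimal consensus-clustering sketch; `expr` (patients x genes) is synthetic here.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
expr = rng.normal(size=(100, 50))        # placeholder PRG expression matrix

def consensus_matrix(expr, k, n_resamples=100, frac=0.8):
    n = expr.shape[0]
    hits = np.zeros((n, n))              # times two samples clustered together
    tries = np.zeros((n, n))             # times two samples were co-sampled
    for _ in range(n_resamples):
        idx = rng.choice(n, size=int(frac * n), replace=False)
        labels = KMeans(n_clusters=k, n_init=10).fit_predict(expr[idx])
        same = (labels[:, None] == labels[None, :]).astype(float)
        hits[np.ix_(idx, idx)] += same
        tries[np.ix_(idx, idx)] += 1.0
    return np.divide(hits, tries, out=np.zeros_like(hits), where=tries > 0)

def consensus_cdf_area(M):
    # Area under the empirical CDF of consensus values; the change of this
    # area with k (the "delta area") guides the choice of cluster number.
    vals = np.sort(M[np.triu_indices_from(M, k=1)])
    cdf = np.arange(1, vals.size + 1) / vals.size
    return np.trapz(cdf, vals)

areas = {k: consensus_cdf_area(consensus_matrix(expr, k)) for k in range(2, 10)}
```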
Construction and validation of a PRGs signature
Samples with complete survival information from TCGA (n = 692) and CGGA (n = 929) were used to perform the univariate Cox analysis, and a false discovery rate (FDR) < 0.05 was used to identify genes associated with survival. Subsequently, an FDR < 0.05 was applied to recognize differentially expressed genes (DEGs) between LGG and GBM. Then, we constructed a protein-protein interaction (PPI) network for the prognosis-related DEGs using STRING and Cytoscape software (version 3.9.0). Furthermore, Cytoscape's Cytohubba plug-in, combined with the Maximal Clique Centrality (MCC) method, was used to identify hub nodes. The LASSO L1-penalized Cox regression method was used for variable selection with one thousand simulations in the "glmnet" package of R (Tibshirani, 1997; Friedman et al., 2010; Zhou et al., 2019). The risk score, based on gene expression and the corresponding regression coefficients, was calculated by the following formula: Risk score = Σ_{i=1}^{n} Coef_i × x_i, where x_i represents the normalized expression level of target gene i and Coef_i the corresponding regression coefficient. According to the median value of the risk score, all patients were further divided into high-risk or low-risk groups. For dimensionality reduction and data visualization, principal component analysis (PCA) was performed using the 'prcomp' function in the stats package, and t-distributed stochastic neighbour embedding (t-SNE) analysis was applied using the Rtsne package.
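As a concrete illustration of the risk-score formula and the median split, a short sketch follows; the expression matrix and coefficient values are placeholders, not the fitted LASSO-Cox coefficients.

```python
# Risk score = sum_i Coef_i * x_i, followed by the median split into risk groups.
import numpy as np

rng = np.random.default_rng(1)
X = rng.normal(size=(692, 5))                  # placeholder normalized expression
coefs = np.array([0.3, 0.1, 0.25, 0.2, 0.15])  # hypothetical regression coefficients

risk_score = X @ coefs
group = np.where(risk_score > np.median(risk_score), "high-risk", "low-risk")
```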
ssGSEA functional analysis
To explore the immune infiltration status in the low-risk and high-risk groups, ssGSEA in the "gsva" R package was used to calculate the infiltration scores of sixteen immune cell types and the activity of thirteen immune-related functions (Subramanian et al., 2005; Farshadpour et al., 2012; Vacchelli et al., 2014; Meadows and Zhang, 2015; Vigneron et al., 2015; Hugo et al., 2016). Besides, we analyzed the correlation between the expression of the signature genes and immune infiltrating cells through ssGSEA.
Cell culture and transfection
Human glioma cells (U87 and LN229) were purchased from the Chinese Academy of Sciences Cell Bank (Shanghai, China). Cell lines were cultured in Dulbecco's modified Eagle's medium (DMEM) or DMEM/F12 with 10% fetal bovine serum (Gibco, United States) under a humidified atmosphere of 5% CO2 at 37°C. Cells were transfected with siRNAs using riboFECT CP (RiboBio, Guangzhou, China). Specifically, 5 × 10^5 cells were seeded in 6-well plates overnight and transfected with siRNA targeting GSDMD (RiboBio, Guangzhou, China). Knockdown efficiency of the siRNA was validated by western blotting.
Western blot
Glioma cells were lysed using RIPA buffer (Solarbio) with protease inhibitors, then centrifuged at 13,000 rpm for 30 min at 4°C. Total protein concentrations were measured with a spectrophotometer (NanoDrop). Protein samples were subjected to sodium dodecyl sulfate-polyacrylamide gel electrophoresis (gels from EpiZyme Scientific) and transferred onto PVDF membranes. The membranes were blocked in 5% milk-TBST and incubated overnight at 4°C with primary antibodies. After incubation with HRP-labeled secondary antibodies at room temperature for 1 h, protein bands were visualized using a ChemiDoc MP Imaging System (Bio-Rad).
Statistical analysis
The t-test was used to compare gene expression between LGG and GBM samples. The ssGSEA scores of immune infiltrating cells and immune-related functions between the two risk groups were compared by the Mann-Whitney test, and the Benjamini & Hochberg method was used to adjust p-values; a minimal sketch of these tests is given below. Kaplan-Meier analysis was used to compare OS between the high-risk and low-risk groups. Univariate and multivariate Cox regression analyses were used to determine independent predictors of OS. Data analysis in this study was performed with R software (version 4.0.5).

Figure 1 shows the flow chart of our work. The study involved 692 glioma patients in the TCGA cohort and 929 glioma patients in the CGGA cohort. Table 1 summarizes the clinical characteristics of these patients.
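The sketch referenced above illustrates the statistical comparisons with standard Python routines (the study itself used R); all data vectors here are synthetic placeholders.

```python
# t-test (LGG vs GBM expression), Mann-Whitney test (ssGSEA scores between risk
# groups), and Benjamini-Hochberg adjustment over a family of p-values.
import numpy as np
from scipy.stats import ttest_ind, mannwhitneyu
from statsmodels.stats.multitest import multipletests

rng = np.random.default_rng(2)
lgg, gbm = rng.normal(0, 1, 467), rng.normal(0.5, 1, 168)
scores_low, scores_high = rng.normal(0, 1, 300), rng.normal(0.4, 1, 300)

t_stat, p_expr = ttest_ind(lgg, gbm)
u_stat, p_ssgsea = mannwhitneyu(scores_low, scores_high)

reject, p_adj, _, _ = multipletests([p_expr, p_ssgsea], method="fdr_bh")
```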
Glioma subtypes based on consensus clustering analysis
To explore the prognostic implications of PRGs, we performed consensus clustering analysis on glioma patients in the training cohort. When the clustering variable k equaled 2, the empirical CDF plot showed the smallest variation over the consensus index range of 0.1-0.9, and the delta area scored highest (Figures 2A, B). Also with k = 2, the consensus matrix plot showed the highest consistency (Figure 2C). Therefore, glioma patients were divided into two subtypes, namely cluster 1 and cluster 2. Interestingly, patients in the cluster 2 group had better OS than those in the cluster 1 group (Figure 2D). The clinical characteristics of the two clusters are shown in Supplementary Table S2. In detail, a total of 625 glioma patients were classified into the cluster 1 and cluster 2 groups. The differences in WHO grade, IDH status and age between the two glioma subtypes were statistically significant. In contrast, there were no significant differences in 1p/19q status or gender between the two subgroups.
Identification of prognostic DEGs in the TCGA and the CGGA cohort
Most of the PRGs (88/107, 82.2%) showed significantly differential expression between LGG and GBM in the TCGA cohort. In univariate Cox regression analysis, eighty-three of these genes (83/107, 77.6%) were associated with OS. Similarly, we analyzed DEGs (75/96, 78.1%) and performed univariate Cox regression analysis (72/96, 75.0%) in the CGGA cohort. A total of fifty-eight PRGs were acquired by taking the intersection of the four gene sets from the two cohorts (Figure 3A). Then, we obtained the interaction information of forty-eight proteins from STRING, using a combined score ≥ 0.7 as the screening condition, and constructed a PPI network in Cytoscape. Next, the top 10 hub nodes were identified using Cytoscape's Cytohubba plug-in (Figure 3B). The information on the top 10 prognostic DEGs is presented in Table 2.
Construction of a prognostic signature in the TCGA cohort
A new prognostic signature was established based on the genes corresponding to the top 10 hub nodes mentioned above. The five-gene signature (including CASP4, CASP5, CASP8, GSDMD, and NLRC4) was determined by the optimal value of the regularization parameter λ (Figures 3C, D), which divided patients into high-risk or low-risk groups based on the median cut-off value (Figure 4A). The risk score was calculated with the formula given in the Methods. Figure 4B shows that patients in the high-risk group had a higher mortality rate and shorter survival time compared with patients in the low-risk group, and PCA and t-SNE analyses showed that patients in different risk groups were distributed in discrete directions (Figures 4C, D). As shown in Figure 4E, the OS of patients in the high-risk group was markedly shorter than that of patients in the low-risk group (p < 0.001). The time-dependent ROC showed that the area under the curve (AUC) reached 0.858 at 1 year, 0.878 at 3 years, and 0.825 at 5 years (Figure 4F), which demonstrated the excellent prognostic performance of this signature.
Validation of the prognostic signature in the CGGA cohort
To test the robustness of the signature established in the TCGA cohort, patients in the CGGA cohort were also divided into two risk groups according to the median value calculated by the same formula ( Figure 5A). Similar to the results obtained from the TCGA cohort, patients with a higher risk score had a shorter survival time ( Figure 5B). Also, both PCA and t-SNE analyses showed that patients in different risk groups distributed in discrete directions (Figures 5C, D). Subsequently, patients in the high-risk group showed a markedly worse survival compared with patients in the low-risk group (p < 0.001, Figure 5E). Moreover, the AUC reached 0.672 at 1 year, 0.716 at 3 years, and 0.731 at 5 years ( Figure 5F), validating the robustness of this model.
Independent prognostic value of the PRGs signature
Univariate and multivariate Cox regression analyses were performed to identify whether the risk score is an independent prognostic factor of OS (Figures 6B, D). Furthermore, we constructed a nomogram to predict the 1-year, 3-year, and 5-year OS of glioma patients using five prognostic factors, including age, grade, IDH status, 1p/19q status and risk score (Figure 7A). The calibration curves shown in Figures 7B-D demonstrate that the survival rates predicted by the model are consistent with the actual survival rates.
Profiles of signature genes in glioma
Pan-cancer analysis showed that the five signature genes were highly expressed in a variety of tumors, including GBM and LGG (Supplementary Figure S1). We also compared the expression of these genes in glioma and normal tissues, and the results demonstrated that their expression in gliomas was significantly higher than in normal tissues. ROC curves reflected the diagnostic efficiency of these genes (Supplementary Figure S2). Then, we analyzed the relationship between the expression level of each signature gene and the clinicopathological features of glioma patients. These results showed that the five signature genes were highly expressed in high-grade glioma, the IDH wild-type and 1p/19q non-codeletion subtypes, and elderly patients (Figures 8A, B; Supplementary Figure S3). GSDMD, as a key effector molecule of pyroptosis, is activated to mediate the formation of cell membrane pores, ultimately causing cytokine release and inflammatory cell death (Hu et al., 2020). Figure 8C shows the GSDMD-dependent pyroptosis signaling pathway. After GSDMD knockdown in glioma cells, the expression of IL-1β, cleaved caspase-1 and GSDMD-NT decreased, while the expression level of pro-caspase-1 was not affected (Figure 8D). We also evaluated the survival of glioma patients using TCGA data and found that the expression level of GSDMD was significantly associated with poor prognosis (Figure 8E). These results indicate that GSDMD is involved in mediating pyroptosis in gliomas and that its expression level has prognostic value in glioma patients.
Immune infiltration status in the TCGA and the CGGA cohort
Pyroptosis can regulate the immune response through the release of immune-stimulatory factors. To explore the immune infiltration status of the two risk groups, the enrichment scores of different immune cell types and immune-related functions were quantified by ssGSEA. In the TCGA cohort, compared with the low-risk group, the high-risk group had higher scores of immune cells including CD8+ T cells, iDCs, macrophages, pDCs, Th2 cells, TIL and Treg, etc. (Figure 9A). Notably, the macrophage scores showed the most significant difference between the two risk groups. Additionally, the scores of APC co-stimulation, CCR, checkpoint molecules, type I IFN response and type II IFN response in the high-risk group were evidently higher than those in the low-risk group (p < 0.001, Figure 9C). The differences in immune infiltration status between the two risk groups were verified in CGGA (p < 0.001, Figures 9B, D). Additionally, we performed ssGSEA to analyze the correlation between the expression of the five signature genes and immune infiltrating cells; the results indicated that these genes are closely associated with a variety of immune infiltrating cells (Supplementary Figure S4).
Discussion
Cell death is a fundamental physiological process, whose three most widely known patterns are apoptosis, necroptosis, and pyroptosis (Man et al., 2017). Pyroptosis, a type of programmed and inflammatory death discovered after apoptosis and necroptosis, is involved in the occurrence and progression of multiple tumors (Xia et al., 2019). As research develops, the relationship between pyroptosis and tumors is increasingly understood and provides inspiration for treatment strategies. Cancer cells, including glioma cells, have evolved multiple mechanisms to evade programmed cell death in order to maintain their survival (Fulda, 2018). However, current research on the biological value and roles of pyroptosis in the occurrence and progression of glioma is limited. Pyroptosis is a gasdermin-mediated programmed cell death induced by caspase-1/4/5/11 and has been extensively studied in multiple diseases (Maltez et al., 2015). The GSDM protein family, especially GSDMD and GSDME, are important mediators of pyroptosis (Hsu et al., 2021). In the process of pyroptosis, the GSDM protein is cleaved to release the GSDM-N domain, which forms pores in the cell membrane, causing cytoplasm swelling, membrane rupture, and the release of inflammatory factors into the extracellular environment (Sarhan et al., 2018). Increased inflammation creates a local environment that is conducive to tumorigenesis, and plenty of evidence indicates that chronic inflammation plays a crucial role in carcinogenesis and tumor progression (Dinarello, 2010; Carmi et al., 2013; Zha et al., 2020). Previously, it was reported that decitabine can enhance caspase-3 cleavage of GSDME and cause cancer cell pyroptosis, indicating that targeting pyroptosis might be a promising therapeutic strategy for cancer. Furthermore, another study suggests that kaempferol inhibits glioma cell proliferation and tumor growth by inducing pyroptosis; specifically, kaempferol can induce autophagy by increasing reactive oxygen species and ultimately trigger the pyroptosis of glioma cells. Therefore, it is reasonable to believe in the prospective applications of pyroptosis in glioma therapies.
In the present work, one hundred and eighteen genes involved in pyroptosis-related gene sets were obtained from MSigDB and GeneCards. We performed consensus clustering analysis and divided glioma patients into two clusters; the OS of cluster 2 was significantly better than that of cluster 1, indicating the prognostic value of PRGs in gliomas. Next, we constructed a protein-protein interaction (PPI) network for the screened PRGs and identified the top 10 hub nodes using Cytoscape's Cytohubba plug-in. Then, we constructed a novel five-gene signature for prognosis prediction in patients with glioma. The risk score of this signature was an independent prognostic factor of glioma, and patients in the high-risk group had worse survival. Calibration plots demonstrated that the nomogram combining the risk score with conventional clinical prognostic factors performed well in predicting survival for glioma patients. A previous study reported a comprehensive analysis of the role of pyroptosis in glioma, which constructed a prognostic signature based on fifteen pyroptosis-related genes and analyzed the molecular classification and immunity of glioma. The three genes CASP4, CASP5, and CASP8 contained in our gene signature are consistent with the fifteen-gene signature in the previous study. These two gene signatures validate each other, reflecting the plausibility of this PRGs-based prognostic approach. Currently, there are several methods to predict glioma prognosis, including clinicopathological classification, MRI imaging, and the detection of molecular markers (such as IDH mutation status, 1p/19q codeletion, and MGMT promoter methylation status) (Sledzinska et al., 2021). IDH mutations are positive prognostic markers in gliomas and have significant prognostic value. Additionally, MGMT promoter methylation is important for predicting the response to temozolomide (TMZ) in glioma patients, which indirectly reflects patient prognosis. Compared with the above methods, our study focused on the role of pyroptosis in the prognosis of gliomas and constructed a new predictive signature based on PRGs. This prognostic signature was combined with clinical characteristics and provides a more comprehensive and individualized approach to prognostic assessment. Although the prognostic signature requires more prospective data to further test its clinical utility, our results suggest that the five-gene prognostic signature may be a powerful indicator of glioma patient survival.
The novel prognostic signature established in our study contains five PRGs. GSDMD, a member of the gasdermin protein family, is considered the executioner of pyroptosis (Broz et al., 2020). The protein contains an inhibitory C-terminal domain and a pore-forming N-terminal domain (GSDMD-NT). GSDMD can be cleaved by caspase-1/4/5/11 to expose the N-terminal domain, causing the formation of membrane pores (Kayagaki et al., 2015; Shi et al., 2015). Caspases are a family of cysteine proteases that play a key role in inflammation, cell death and disease (Van Opdenbosch and Lamkanfi, 2019). CASP4, CASP5, and CASP8 encode the proteins caspase-4, caspase-5, and caspase-8, respectively.
Caspase-4 and caspase-5, activated by lipopolysaccharide, promote GSDMD-mediated pyroptosis, while caspase-8 is mainly involved in triggering death receptor-mediated apoptosis (Downs et al., 2020; Mandal et al., 2020). In glioblastoma, caspase-8 promotes the expression of various cytokines, angiogenesis, and tumorigenesis (Fianco et al., 2018). However, the roles of CASP4 and CASP5 in gliomas have not been reported. NLRC4 is an apoptosis-related protein that interacts with caspase-1 and induces the activation of inflammasomes (Duncan and Canna, 2018). As a member of the NOD-like receptor family, NLRC4 is recognized by NAIP subfamily proteins and binds to form the NAIP-NLRC4 inflammasome (Kay et al., 2020). Lim et al. (2019) previously reported that upregulation of the NLRC4 inflammasome is associated with poor prognosis in glioma patients. In this study, we analyzed the relationship between the expression of these five PRGs and the prognosis and clinicopathological characteristics of glioma patients. The results suggest that high expression of these genes is associated with a poor clinical phenotype and prognosis. We then selected GSDMD from the five PRGs for knockdown and found that GSDMD participates in mediating pyroptosis in gliomas. Since there are few reports about PRGs in gliomas, the specific mechanisms of these genes in gliomas require further research.
Recently, pyroptosis has attracted increasing interest due to its role in activating the immune system. Research on the characteristics of pyroptosis and its roles in pathophysiological conditions has advanced our understanding of inflammation, immune responses, and tumor development (Tan et al., 2021). On the one hand, the release of cytokines produced by pyroptosis changes the immune microenvironment and promotes tumor development through immune evasion. On the other hand, the cytokines produced by pyroptosis can also recruit immune cells and activate the immune system to enhance the effect of tumor immunotherapy. In the present study, we performed ssGSEA to explore the immune infiltration status between the two risk groups. In particular, patients in the high-risk group showed a higher proportion of immune infiltrating cells, including iDCs, pDCs, Th2 cells, TIL, Treg and macrophages. Moreover, the expression levels of the five signature genes were closely related to immune infiltrating cells. A reasonable argument is that pyroptosis of cancer cells activates anti-tumor immunity, which significantly increases the number of immune cells (Wang et al., 2020b). Nevertheless, the underlying mechanism linking PRGs and immune status in glioma is still unclear, and further research is needed. Besides, previous studies have suggested that tumor-associated macrophages are involved in the poor prognosis of glioma because of their roles in immune suppression and invasion. Therefore, increased macrophage infiltration in patients in the high-risk group may be one of the reasons for their poor prognosis.

Figure 9 caption: Comparison of the ssGSEA scores between different risk groups in the TCGA cohort (A, C) and CGGA cohort (B, D). The scores of sixteen immune cell types (A, B) and thirteen immune-related functions (C, D) are displayed in boxplots. (aDC, activated dendritic cell; iDC, immature dendritic cell; pDC, plasmacytoid dendritic cell; Tfh, T follicular helper cell; Th2, T helper 2; TIL, tumor infiltrating lymphocyte; Treg, regulatory T cell; HLA, human leukocyte antigen; APC, antigen presenting cell; CCR, cytokine-cytokine receptor. Adjusted p values are shown as: ns, not significant; ***, p < 0.001.)
Conclusion
Our study constructed a novel PRGs signature, which could serve as a powerful tool to predict the prognosis of glioma patients. This signature could be applied to risk stratification of patients with glioma. In addition, the signature was associated with immune infiltration status, providing an improved understanding of the immune response in glioma. Our results indicate that PRGs may be promising therapeutic targets in gliomas.
Data availability statement
The original contributions presented in the study are included in the article/supplementary material; further inquiries can be directed to the corresponding authors.
Sparse Constrained Least-Squares Reverse Time Migration Based on Kirchhoff Approximation
Least-squares reverse time migration (LSRTM) is powerful for imaging complex geological structures. Most studies are based on the Born modeling operator under the assumption of small perturbations. However, studies have shown that LSRTM based on the Kirchhoff approximation performs better; in particular, it generates a clearer image of subsurface reflectors and fits large-offset data well. Moreover, minimizing the difference between predicted and observed data in a least-squares sense leads to an average solution of relatively low quality. This study applies L1-norm regularization to LSRTM (L1-LSRTM) based on the Kirchhoff approximation to compensate for the shortcomings of conventional LSRTM, obtaining a better reflectivity image and balancing the residual against the resolution. Several numerical examples demonstrate that our method can effectively mitigate the deficiencies of conventional LSRTM and provide a higher-resolution image profile.
INTRODUCTION
Seismic migration is the inverse of forward modeling: it aims to recover the interior of the earth medium from recorded data. Specifically, migration attempts to eliminate the effects of physical propagation and obtain an image that clearly depicts the structural information of interest. Reverse time migration (RTM), a state-of-the-art seismic imaging method (Baysal et al., 1983; McMechan, 1983), implements this acausal procedure appropriately. Based on the two-way wave equation, RTM is powerful for handling complex geological settings and velocities with dramatic lateral variation. Therefore, it deals with steep dips and salt domes better than conventional migration (Zhu and Lines, 1998; Yoon et al., 2003; Liu et al., 2010). However, most migration methods, including RTM, use the adjoint operator to compute the image instead of the inverse operator (Tarantola, 1984). Practical data suffer from many factors, such as irregular acquisition geometry and the limited aperture of the acquisition system; these deficiencies generate artifacts and degrade the resolution. To overcome these limitations, least-squares migration (LSM) was proposed and combined with RTM (Liu et al., 2011; Dai et al., 2012). Seismic imaging can thus be regarded as a linearized inverse problem: with a proper initial velocity model, seismic records can be inverted to a more accurate profile. LSRTM iteratively reduces the residual between predicted and observed data in a least-squares framework, so the adjoint operator keeps approaching the inverse operator. Many results have indicated that LSRTM performs better than conventional RTM and migration (Zhang et al., 2015; Dutta and Schuster, 2014; Liu et al., 2016).
The precondition of seismic inversion is forward modeling, which maps the parameter model to seismic data. There are two main approaches to building a linear approximation between the physical model and the wavefield (Yang and Zhang, 2019). One is the most commonly used Born approximation, based on small perturbations (Beylkin, 1985; Bleistein, 1987). It requires that high-order scattered wavefields be much weaker than the primary field. The Born operator describes a linear relationship between the model perturbation and the primary reflected wave: it divides the wavefield into a background wavefield and a perturbation wavefield, and LSRTM based on the Born approximation recovers the model perturbation from these two fields. An alternative scheme for modeling is the Kirchhoff approximation (Bleistein, 1987). Compared with Born modeling, the Kirchhoff operator delineates the connection between the primary reflected wave and the reflectivity. The two operators lead to distinctive results under these two physical contexts. However, neither the Born nor the Kirchhoff approximation can avoid the impact on the seismic image of working in a least-squares sense, because minimizing the L2 norm only provides an average solution (Wang, 2016; Wu et al., 2016). It is essential to seek a balance between the residual and the resolution. According to geological knowledge, the earth medium usually presents a layered spatial distribution, so the reflection coefficients that mirror strata attributes should be sparse; that is, the parts of the model that do not generate reflected waves ought to be zero. Therefore, the inverted model needs a sparsity constraint.
This study implements a Kirchhoff modeling formula for sparsity-promoted LSRTM. The reflectivity model is regularized with the L1 norm while the wavefield residual is minimized in the L2 norm. Referring to the 'least absolute shrinkage and selection operator' (Lasso) problem (Tibshirani, 1996), this reformulated LSRTM can be solved by the spectral projected gradient for L1 minimization (SPGL1) algorithm, which is designed to solve sparse least-squares problems (van den Berg and Friedlander, 2011). Examples show that our method can effectively overcome the problems mentioned above.
METHOD
RTM has great advantages in imaging steep structures such as salt domes. However, it suffers from low-frequency noise compared with conventional migration. Least-squares migration approaches the optimal solution iteratively and eventually obtains a profile with a relatively high signal-to-noise ratio, high resolution and equalized amplitudes that eliminates the influence of the acquisition system. It contains three steps: first constructing a linear modeling problem, then using the forward- and backward-propagated wavefields to image, and finally updating the physical model according to the residual.
Linear Modeling Operators
The linearization of the nonlinear forward problem is essential to seismic inversion, making the physical process more explicit; moreover, inverting for the medium parameters becomes easier. The choice of linear operator leads to different physical significance and images. It is common to use the Born approximation to realize the linearization. The real velocity model is divided into two parts: the background velocity v_0 and the velocity perturbation δv. A given perturbation δv generates a corresponding wavefield perturbation δu. The Born operator describes the relationship between the reflected wave and the model perturbation. Specifically, the incident wave interacting with the model perturbation becomes a new source (the Huygens principle), and this new source generates the wavefield perturbation. In the time domain, this can be expressed as

$$\frac{1}{v_0^2}\frac{\partial^2 u_0}{\partial t^2} - \nabla^2 u_0 = f(t; x_s)\,\delta(x - x_s), \qquad (1)$$

$$\frac{1}{v_0^2}\frac{\partial^2 \delta u}{\partial t^2} - \nabla^2 \delta u = \frac{m(x)}{v_0^2}\,\frac{\partial^2 u_0}{\partial t^2}, \qquad (2)$$

where u_0 represents the background field propagating in v_0, f(t; x_s) is the source signature located at x_s and excited at time t, and the model perturbation is denoted by m(x) = 2δv(x)/v_0(x), which describes velocity changes relative to the background velocity.
Here x is a point in the model, and this study assumes that the density ρ is constant. Equations (1) and (2) can be rewritten in integral form using Green's theorem:

$$u_0(x, t; x_s) = \int \mathrm{d}\tau\, G_0(x, t - \tau; x_s)\, f(\tau; x_s), \qquad (3)$$

$$\delta u(x_g, t; x_s) = \int \mathrm{d}x\, \frac{m(x)}{v_0^2(x)} \int \mathrm{d}\tau\, G_0(x_g, t - \tau; x)\, \frac{\partial^2 u_0(x, \tau; x_s)}{\partial \tau^2}, \qquad (4)$$

where G_0(x, t; x_s) is the Green's function from x_s to x and G_0(x_g, t; x) propagates from x to x_g. The Green's function is governed by

$$\frac{1}{v_0^2}\frac{\partial^2 G_0}{\partial t^2} - \nabla^2 G_0 = \delta(t)\,\delta(x - x_s), \qquad (5)$$

where δ denotes the Dirac delta function. The Born approximation represents the scattering caused by a model perturbation, which makes it a means of linearizing seismic inversion. However, this approximation is accurate only when the scattered field δu is much weaker than the background field u_0 (Schuster, 2017), which is a disadvantage of the Born approximation. It cannot describe the kinematic and dynamic information of seismic waves well for strong reflectors, and studies have shown that the Born approximation has limited angle validity: it cannot appropriately predict reflections generated at large incident angles (Yang and Zhang, 2019).
Compared to the Born operator, the Kirchhoff operator relates the reflectivity to the wavefield perturbation. It therefore depicts the interaction between the incident field and the reflectivity rather than a velocity perturbation. When the perturbation and the incident angle are small, the reflectivity and the model perturbation are related through Eq. 6 (Stolt and Weglein, 2012), where r(x, α) is the reflection coefficient at point x with incident angle α between the incident ray and the normal (Figure 1). This means that we can obtain the wavefield perturbation under the Kirchhoff approximation by substituting Eq. 6 into Eq. 4, which gives Eq. 7. Here we turn the Kirchhoff modeling equation into the same form as the Born approximation, so Eq. 7 can be rewritten as Eq. 8.
It should be noted that the term r(x, α) can be replaced by a generalized angle-dependent reflectivity model to remove the limitations of small perturbation and small incident angle. Although there are methods to determine the propagation direction of the wave, such as the Poynting vector and plane-wave decomposition (PWD), obtaining the angle term is still tedious and time-consuming. Here we give an approximate scheme (Yang and Zhang, 2019).
Each shot can invert a reflectivity image; here we sum the images obtained from all shots and regard the summation as the final reflectivity model used in the iteration. Approximately, multiple-shot stacking yields an averaged reflectivity model. Therefore, we can predict the data using this stacked reflectivity R(x) rather than the angle-dependent term r(x, α) cos(x, α). Note that R(x) is an averaged reflectivity over all illuminated angles.
With this approximate reflectivity R(x), we can express Eq. 9 as Eq. 10. In sum, with the relationship between reflectivity and model perturbation, the two linear approximations have a similar form, which expresses their common ground. The difference between the two approximations is also evident. From Eq. 6, cos α approximately equals one and can be ignored for a small incident angle. Therefore, the reflectivity can be regarded as the spatial derivative of the model perturbation. The inverted model after spatial differentiation has a higher resolution, that is, its spectrum is enhanced. More details are provided in the numerical tests.
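The following short calculation makes this derivative relationship explicit; it is a sketch under the assumption of the one-dimensional Fourier convention $\hat{m}(k)=\int m(x)\,e^{-ikx}\,\mathrm{d}x$, in which multiplication by $ik$ corresponds to spatial differentiation:

```latex
% Small-angle reduction of Eq. 6 (cf. the fault-model section): in the
% wavenumber domain, \hat{r}(k) = (ik/2)\,\hat{m}(k). Since the inverse
% Fourier transform of ik\,\hat{m}(k) is dm/dx, we have
\hat{r}(k) = \frac{ik}{2}\,\hat{m}(k)
\quad\Longleftrightarrow\quad
r(x) = \frac{1}{2}\,\frac{\mathrm{d}m(x)}{\mathrm{d}x},
% i.e., the reflectivity is (up to the factor 1/2) the spatial derivative
% of the model perturbation, which boosts its high-wavenumber content.
```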
Least-Squares With Sparse Optimization
In contrast to full waveform inversion (FWI) (Liu et al., 2020), LSRTM first establishes a linear relationship between the physical model and the corresponding response (Tarantola, 1984) and then solves the inverse problem. The least-squares method (LSM) only requires the construction of a migration operator and a demigration operator, which are adjoint to each other. It iteratively reduces the residual between the observed and predicted data to gradually approach the optimal solution of the inverse problem. According to the linear approximations above, we can express them in matrix form:

$$d = Lm, \qquad (12)$$

where d is the predicted data (background or perturbation fields), L represents the modeling operator, and m is the physical model. Usually, it is assumed that the background velocity has been obtained in advance, so that the data can be predicted. Hence, the misfit function can be expressed as

$$E(m) = \frac{1}{2}\,\lVert Lm - d\rVert_2^2. \qquad (13)$$

The model m that makes ∂E(m)/∂m (the Jacobian) equal to 0 is the optimal solution of Eq. 13. However, the computation of the Jacobian is quite time-consuming, particularly in seismic exploration. We adopt the adjoint-state method to calculate the adjoint operator L^T of the modeling operator L. Specifically, the gradient of E(m) can be obtained by back-propagating the wavefield residual together with the background field; the gradient based on the Kirchhoff approximation is given by Eq. 14 (Plessix, 2006; Wang et al., 2021), where q(x, t; x_s) is the adjoint wavefield governed by Eq. 15. According to Eq. 13, we can obtain the least-squares solution m = (L^T L)^{-1} L^T d. Note that LSM provides a smooth solution of the model, which is determined by the properties of the L2 norm. As a result, LSM has a limited ability to improve the quality of the image. Here we give a simple model to display the impact of LSM. In this example, we use a Ricker wavelet with a center frequency of 30 Hz and a time sampling interval of 1 ms. Following convolution-model theory, seismic records are obtained by convolving the Ricker wavelet with the reflection coefficients, i.e., d = Lm. Conversely, the reflection coefficients can be estimated by deconvolving the seismic records with the wavelet, that is, m_mig = L^T d. Figure 2C is the result of this deconvolution, in which it is hard to identify the reflectors. Compared to deconvolution, Figure 2D shows that LSM improves the resolution obviously. However, many oscillations caused by (L^T L)^{-1} appear near the true reflection coefficients and should not exist; this is why we regard the least-squares solution as a smooth or average solution. The actual model indicates that the medium presents a layered spatial distribution, as shown in Figure 2A or Figure 2B, that is, the subsurface reflectors are sparse. In Figure 2E, the inversion result with sparsity-constrained LSM performs quite well: the oscillations generated by LSM are suppressed, and the resolution and sparsity of the reflection coefficient series are improved effectively.
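The toy experiment just described can be reproduced with the short sketch below; the wavelet construction, the reflector positions, and the use of scikit-learn's Lasso as the sparse solver are illustrative assumptions, not the authors' implementation.

```python
# Convolution-model toy: d = Lm with a 30 Hz Ricker wavelet, then compare the
# adjoint image L^T d, the least-squares solution, and a sparse (L1) solution.
import numpy as np
from sklearn.linear_model import Lasso

dt, f0, nt = 1e-3, 30.0, 400
t = (np.arange(nt) - nt // 2) * dt
ricker = (1 - 2 * (np.pi * f0 * t) ** 2) * np.exp(-(np.pi * f0 * t) ** 2)

m = np.zeros(nt)
m[[120, 200, 260]] = [1.0, -0.8, 0.6]          # sparse reflection coefficients

# Column k of L is the wavelet centered at sample k (circular shift at edges).
L = np.array([np.roll(ricker, k - nt // 2) for k in range(nt)]).T
d = L @ m                                      # synthetic seismic record

m_mig = L.T @ d                                # "deconvolution" via the adjoint
m_ls = np.linalg.lstsq(L, d, rcond=None)[0]    # least-squares (average) solution
m_l1 = Lasso(alpha=1e-3, max_iter=10000).fit(L, d).coef_  # sparsity-constrained
```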
Due to the feasibility and the sparsity-promoting property of the L1 norm, we modify the objective function with the L1 norm to realize sparse reconstruction of the model. Generally, Eq. 13 can be reformulated as two new problems.
1) Basis pursuit (BP) problem:

$$(\mathrm{BP}):\quad \min\, \lVert m\rVert_1 \quad \text{subject to}\quad Lm = d. \qquad (16)$$

Eq. 16 depicts a BP problem, which comes from compressed-sensing theory and seeks a sparse solution satisfying Lm = d. However, practical seismic data inevitably contain noise, so Eq. 16 can be modified into a basis pursuit denoising (BPDN) problem:

$$(\mathrm{BP}_\sigma):\quad \min\, \lVert m\rVert_1 \quad \text{subject to}\quad \lVert Lm - d\rVert_2 \le \sigma, \qquad (17)$$

where σ describes the noise level in the data; Eq. 16 and Eq. 17 are equivalent when σ = 0.
2) Least absolute shrinkage and selection operator (Lasso) problem:
$$(\mathrm{LS}_\tau):\quad \min\, \lVert Lm - d\rVert_2 \quad \text{subject to}\quad \lVert m\rVert_1 \le \tau, \qquad (18)$$

where τ ≥ 0 is an explicit limit on the sparsity of m. Problems (BP_σ) and (LS_τ) are different descriptions of the same question: they are equivalent in the sense that, for a given σ, there exists a solution m* of (BP_σ) and a corresponding τ that makes m* also a solution of (LS_τ). Both problems can be solved by the spectral projected gradient for L1 minimization (SPGL1) algorithm. Given a constraint τ, the residual norm of Eq. 18 defines

$$\phi(\tau) = \lVert Lm_\tau - d\rVert_2, \qquad (19)$$

and the equation

$$\phi(\tau) = \sigma \qquad (20)$$

recasts (LS_τ) as the problem of finding the root of a nonlinear equation; φ defines a continuous curve, the Pareto curve (Figure 3). For a given σ, SPGL1 uses Newton's method to approach the root, and as τ is updated iteratively, the optimal solution m_{τ_σ} of problems (BP_σ) and (LS_τ) is obtained. In this way, the 2-norm of the residual is eventually balanced against the 1-norm of the solution (van den Berg and Friedlander, 2009). From Figure 3, the problem degrades to a simple Lasso problem (LS_τ) when the noise-level factor σ equals 0. In this study, we set σ = 0, since synthetic seismic records generally do not contain noise. The SPGL1 algorithm can in fact deal with noisy data, and random noise can be added, or some traces set to zero, in the synthetic data. Besides, the determination of the parameter τ is quite important. According to the theoretical model, we can calculate the perturbation or reflectivity model and make a rough estimate of τ; in general, it is appropriate to set τ to tens of times the value of the calculated perturbation or reflectivity model. Then τ can be adjusted according to the inversion results.
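To illustrate the Lasso subproblem that SPGL1 solves for each fixed τ, a minimal projected-gradient sketch is given below; the sorting-based L1-ball projection and the step-size choice are standard textbook ingredients, and none of this is the authors' code.

```python
# Projected gradient for (LS_tau): minimize ||Lm - d||_2 s.t. ||m||_1 <= tau.
import numpy as np

def project_l1_ball(v, tau):
    """Euclidean projection of v onto {m : ||m||_1 <= tau}; assumes tau > 0."""
    if np.abs(v).sum() <= tau:
        return v
    u = np.sort(np.abs(v))[::-1]               # sorted magnitudes, descending
    css = np.cumsum(u)
    rho = np.nonzero(u * np.arange(1, v.size + 1) > (css - tau))[0][-1]
    theta = (css[rho] - tau) / (rho + 1.0)     # soft-threshold level
    return np.sign(v) * np.maximum(np.abs(v) - theta, 0.0)

def lasso_projected_gradient(L, d, tau, n_iter=500):
    m = np.zeros(L.shape[1])
    step = 1.0 / np.linalg.norm(L, 2) ** 2     # 1 / Lipschitz constant of grad
    for _ in range(n_iter):
        m = project_l1_ball(m - step * L.T @ (L @ m - d), tau)
    return m
```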
Here we summarize the workflow of L1-regularized LSRTM as follows:
1) Obtain the predicted data d_0 with the migration velocity v_0, and compute d_res = d_obs − d_0;
2) Set the initial model m_0 and predict the data Lm_0 based on the Born or Kirchhoff approximation, giving the residual r_0 = Lm_0 − d_res and the gradient g_0 = L^T r_0;
3) Input the parameters τ and σ, and set k = 0;
4) Solve the Lasso problem (LS_τ) with the SPGL1 algorithm, updating m_k, r_k and g_k until k = k_max;
5) Output the result m_{k_max}.
NUMERICAL EXAMPLE
In this study, two theoretical models are used to test the validity of the proposed method: a single diffraction point and a complex fault model. Both tests are based on the two-way acoustic wave equation, and we use the finite-difference method on a regular grid.
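For completeness, a minimal sketch of 2D acoustic finite-difference modeling of the kind used in such tests is shown below; absorbing boundaries, higher-order stencils, and the exact source handling are omitted, and all parameters are illustrative rather than the authors' settings.

```python
# 2D constant-density acoustic wave propagation, 2nd-order in time and space.
import numpy as np

nz, nx, h, dt, nt = 201, 201, 5.0, 5e-4, 2000
v = np.full((nz, nx), 1000.0)
v[100, 100] = 2000.0                                      # diffraction point

f0 = 25.0
t = np.arange(nt) * dt - 1.0 / f0
src = (1 - 2 * (np.pi * f0 * t) ** 2) * np.exp(-(np.pi * f0 * t) ** 2)  # Ricker

u_prev = np.zeros((nz, nx))
u_curr = np.zeros((nz, nx))
record = np.zeros((nt, nx))                               # surface geophones
for it in range(nt):
    lap = (np.roll(u_curr, 1, 0) + np.roll(u_curr, -1, 0) +
           np.roll(u_curr, 1, 1) + np.roll(u_curr, -1, 1) - 4 * u_curr) / h**2
    u_next = 2 * u_curr - u_prev + (v * dt) ** 2 * lap    # leapfrog update
    u_next[0, nx // 2] += (v[0, nx // 2] * dt) ** 2 * src[it]  # inject source
    u_prev, u_curr = u_curr, u_next
    record[it] = u_curr[0]                                # record at the surface
```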
Single-Diffraction Point
To verify the effectiveness of the method, we first set up a simple model with a diffraction point of 2,000 m/s embedded in a background velocity of 1,000 m/s (Figure 4). The entire model is discretized into 201 × 201 grid points in the horizontal and vertical directions with the same interval of 5 m. The acquisition geometry is arranged as follows: a total of 21 shots are uniformly distributed on the surface of the model with an interval of 50 m, and geophones are placed at every grid point on the surface. We use a Ricker wavelet with a center frequency of 25 Hz for modeling, and the sampling interval is 0.5 ms. In this example, we set 1,000 m/s as the migration velocity.
As shown in Figure 5A, the image obtained by LSRTM based on the Born approximation is consistent with the actual situation to a certain degree, but the single scattering point is blurred by a disturbing cross pattern (marked by a yellow arrow), whereas it should appear as a dot on the image (Lecomte and Kjeller, 2008). This is because we use the adjoint operator to migrate rather than the inverse operator in Eq. 14. Specifically, Eq. 13 defines a normal equation L^T Lm = L^T d. The term L^T L, the Hessian matrix, is equivalent to a blurring operator acting on the true image m. Furthermore, L^T L includes the influence of irregular acquisition, limited acquisition aperture, the band-limited source, etc., which generate artifacts and degrade the resolution of the image (Jiang and Zhang, 2019). Compared to LSRTM, the same method with the L1 constraint in Figure 5C performs better; it mitigates the distortion caused by the blurring operator. Therefore, with the promotion of sparsity, the resolution of the least-squares method is improved significantly, and the image looks more like a scattering point.
The LSRTM based on the Kirchhoff operator inverts the reflectivity directly from the seismic records. Figure 5B displays the image produced with the Kirchhoff approximation. Compared to Figure 5A, least-squares RTM based on the Kirchhoff operator suffers from the same problems. Similarly, we apply the L1 norm to this LSRTM, with the result shown in Figure 5D: the cross pattern is clearly eliminated, and we obtain an explicit dot rather than a blurred spot. Therefore, for a simple model, the sparsity-promoting LSRTM based on the Kirchhoff approximation can effectively improve the resolution of the image compared with the unconstrained LSRTM results in Figures 5A,B.
Fault Model
We also test another, relatively complex model. This fault model (Figure 6A) contains some classical geological structures, including folds, fault blocks, and depressions, and thus appropriately represents the complex structure of near-surface media. The maximum and minimum velocities are 4,000 m/s and 1,500 m/s, respectively. Similarly, we discretize it into 265 × 367 grid points with an interval of 5 m, and a total of 25 shots are uniformly located at the surface of the model. The seismic wavelet used for modeling is the same as in the previous experiment.
As shown in Figures 7A,B, the images inverted by LSRTM based on the two approximations fit the fault model well, and the contact relationships between structures are clearly depicted. To further improve the resolution of these images, we combined L1-norm regularization with LSRTM to reconstruct the model (Figures 7C,D). Furthermore, we enlarge the part of the model framed by the red rectangle in Figure 6A, which contains step-like strata (marked by yellow arrows in Figure 8A). After inversion, the reflectivity image in Figure 8D produced by constrained LSRTM based on the Kirchhoff operator agrees with the actual situation. Furthermore, images inverted with the two different approximations have different phases. According to Eq. 6, the reflectivity can be derived from the model perturbation. Under the assumption of a small incident angle, cos(x, α) is roughly equal to 1 and can be ignored; then Eq. 6 can be rewritten as r(x) = ik m(x)/2, where the wavenumber k = ω/c. Therefore, the reflectivity r(x) is the spatial derivative of the model perturbation m(x). As a result, the image inverted with the Kirchhoff operator is sparser and sharper, and there is a phase shift of 90° between the perturbation model and the reflectivity model. Figure 9 shows the amplitude spectra of the images in Figures 7B-D; the spectral curves are the sums over traces of the spatial Fourier transform along depth. The red curve is generated from the image inverted by L1-regularized LSRTM based on the Born approximation, while the blue and green spectral curves are produced by unconstrained and constrained LSRTM with the Kirchhoff approximation, respectively. Because of the spatial derivative and the sparse constraint, the spectrum of the image inverted by L1-LSRTM with the Kirchhoff approximation has more high-wavenumber components than that of the Born approximation, which explains why the Kirchhoff approximation improves the resolution of the image.
CONCLUSION
LSRTM recasts classical seismic imaging as a linear inverse problem. By means of a linear approximation, the physical model is related to the corresponding wavefield; thereafter, the residual between predicted and observed data is reduced iteratively to directly invert the parameters of interest. This study introduces two linearization methods. The Born approximation obtains the relationship between the model and the physical response based on perturbation theory. With the help of the Born operator, we derive another type of linear method, namely the Kirchhoff operator, which relates the reflectivity to the wavefield explicitly. Moreover, the two methods are related by a spatial derivative, and there is a phase shift between the perturbation model and the reflectivity model. Although the two operators invert different physical quantities, the resolution can be improved by the Kirchhoff approximation.
LSRTM can mitigate the shortcomings of other migration methods, but its solution is smooth and deviates from the true model. Specifically, redundant oscillatory axes appear in strata that should be sparsely distributed. Therefore, we reformulate the problem as sparsity-promoting LSRTM. The SPGL1 algorithm can effectively solve this problem and invert a sparse image that matches the model well. The examples prove the validity of our approach.
DATA AVAILABILITY STATEMENT
The raw data supporting the conclusions of this article will be made available by the authors, without undue reservation.
Identifying nonclassicality from experimental data using artificial neural networks
The fast and accessible verification of nonclassical resources is an indispensable step towards a broad utilization of continuous-variable quantum technologies. Here, we use machine learning methods for the identification of nonclassicality of quantum states of light by processing experimental data obtained via homodyne detection. For this purpose, we train an artificial neural network to classify classical and nonclassical states from their quadrature-measurement distributions. We demonstrate that the network is able to correctly identify classical and nonclassical features from real experimental quadrature data for different states of light. Furthermore, we show that nonclassicality of some states that were not used in the training phase is also recognized. Circumventing the requirement of the large sample sizes needed to perform homodyne tomography, our approach presents a promising alternative for the identification of nonclassicality for small sample sizes, indicating applicability for fast sorting or direct monitoring of experimental data.
I. INTRODUCTION
Quantum technologies promise various advantages over classical technologies. By employing different features of quantum systems that are not present in classical systems, one can, e.g., perform more precise measurements, speed up computations, or share information in a more secure way. These nonclassical properties create possibilities to optimally exploit physical systems for many technological challenges. Light fields, described as continuous-variable systems, play a key role for the transmission and manipulation of quantum information [1]. Due to their infinite dimensions and an accessible control by means of linear optical elements and homodyne detection, they are widely considered for quantum technological applications. In the case of single-mode continuousvariable quantum systems, the central quantum resource is nonclassicality [2,3]. Directly related to the negativities [4,5] of the Glauber-Sudarshan P representation of the quantum state [6,7], nonclassicality manifests itself in different observable characteristics such as photon antibunching [8][9][10], sub-Poissonian photon-number statistics [11,12], and quadrature squeezing [13][14][15][16][17], and can be transformed into other quantum resources such as entanglement [18,19]. The fundamental nature of nonclassicality is exploited for the investigation of the roots of quantum phenomena and several quantum technological tasks such as, e.g., precision measurements.
Due to its crucial importance for quantum technologies, a fast and reliable identification of nonclassicality from experimental observations of the quantum state represents an unavoidable step toward the practical usage of this resource. In continuous-variable systems, one of the most common measurement methods is homodyne detection [20]. Advanced state tomography techniques based on this type of measurement have been developed [21,22]. However, nonclassicality certification based on homodyne tomography usually requires many different quadrature measurements and involved analysis tools. A different approach is nonclassicality certification via negativities of reconstructed quasiprobabilities [23] (particularly, the Glauber-Sudarshan P function [24] and the Wigner function [25][26][27][28]). Methods that involve regularizations of quasiprobabilities have been implemented for the single-mode and multimode scenarios [29,30], and more recently, phase-space inequalities have been proposed and tested experimentally [31][32][33]. Finally, a direct nonclassicality estimation without the need for quantum state tomography was proposed in Ref. [34], where the nonclassicality of phase-randomized states was classified via semidefinite programming. In all of the above approaches, to guarantee the detection of nonclassicality with high statistical significance, extensive measurements must be performed (using different measurement settings or sampling different moments), after which advanced postprocessing is required (estimation of pattern functions, reconstruction of quasiprobabilities, and semidefinite programming, among others). Consequently, these methods are often complex and time-consuming. Direct access to nonclassicality identifiers from unprocessed and finite homodyne-detection data is therefore desirable.
In this paper, we use machine learning (ML) techniques to identify nonclassicality of single-mode states based on a finite number of quadrature measurements recorded via balanced homodyne detection. For this purpose, we employ a dense artificial neural network (NN) and train it with supervised learning on simulated homodyne-detection data from several noisy classical and nonclassical states. We demonstrate the successful performance of the NN nonclassicality prediction on real experimental data and compare the results with established nonclassicality identification methods. Furthermore, we test the performance of the network on experimentally generated states that were not used in the training procedure and show that the NN can identify different nonclassical features at once. We conclude that the ML approach offers an accessible alternative for the classification of single-mode nonclassicality and, particularly due to its performance on small sample sizes, constitutes a powerful tool for data pre-selecting, sorting, and on-site real-time monitoring of experiments. Our result represents an approach to training NNs for identifying nonclassicality of single-mode phase-sensitive states, here measured by homodyne detection.
The paper is structured as follows. In Sec. II, we briefly recall the technique of single-mode balanced homodyne detection. In Sec. III, we describe in detail the training of the NN and the resulting nonclassicality identifier. In Sec. IV, we apply the NN to experimental homodyne measurement data and then analyze its performance on untrained data in Sec. V B. We summarize and conclude in Sec. VI.
II. BALANCED HOMODYNE MEASUREMENT AND NONCLASSICAL STATES
Any direct experimental investigation of light is based on photodetection. Depending on the information about the quantum statistics of the measured light that is required, different measurement schemes need to be implemented. For example, photon-counting measurements are not sensitive to the phase of the sensed field. To obtain information about the phase, interferometric methods have to be applied, in which the field is mixed with a reference beam, the so-called local oscillator (LO), just before the intensity measurements [20,22]. The scheme of balanced homodyne detection is shown in Fig. 1. It consists of the signal field ρ̂, the LO, a 50:50 beam splitter (BS), two proportional photodetectors, and the electronics used to subtract and amplify the photocurrents afterwards. Homodyning with an intense coherent LO gives the phase sensitivity necessary to measure the quadrature variances [56][57][58].
This kind of interferometric approach is necessary for the reconstruction of the quasiprobabilities of bosonic states. In principle, all normally ordered moments can be determined from this measurement scheme, including the ones which contain different numbers of creation and annihilation operators. Thus, homodyne detection drastically enlarges our measuring capabilities in a simple way.
The key to the quasiprobability estimation is to perform measurements for a large set of quadrature phases, which ultimately leads to a proper state reconstruction. Balanced homodyne detection and the subsequent reconstruction of the Wigner function have become a standard measurement technique in quantum systems such as, e.g., quantum light, molecules, and trapped atoms [25][26][27][28].
Although experimentally accessible, phase-space function reconstructions and moment-based nonclassicality criteria require significant amounts of measurement data, computational power, and postprocessing time. Here, we propose a shortcut to this process: using NNs, we can perform on-the-fly nonclassicality identification with few measurements.
A. Setup of the network
The input vector of the network consists of a normalized histogram (relative frequencies) of homodyne-detection data collected along a fixed phase setting. To generate the histogram from simulated or experimentally generated data (produced from quadrature-measurement outcomes x), we bin the data into 160 equally sized intervals covering the interval [−8, 8] [59]. Since the histogram is normalized, input vectors constructed from arbitrary numbers of detection events can be used for the same network.
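A sketch of this input-vector construction (with placeholder quadrature data and NumPy binning) is:

```python
# Bin quadrature samples into 160 equal bins on [-8, 8]; normalize to frequencies.
import numpy as np

x = np.random.default_rng(3).normal(size=16000)   # placeholder quadrature data
counts, _ = np.histogram(x, bins=160, range=(-8, 8))
input_vector = counts / counts.sum()              # normalized histogram
```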
We use a fully connected artificial NN with an input layer of size 160, an output layer of size 2 and three hidden layers of sizes 64, 32 and 16. The hidden layers are activated with the rectified linear unit, and the output layer is activated with a softmax function. These parameters were chosen for good performance in discriminating between classical and nonclassical states. The simulated data, consisting of 2 × 10^4 input vectors per training family (see below), are split into training data (80%) and validation data (20%). The network is trained until the validation error stops decreasing for more than 10 training cycles.
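A minimal sketch of such a network in PyTorch is shown below; the framework, the Adam optimizer, and the learning rate are assumptions, as the paper does not specify its implementation. The softmax output is realized implicitly by the cross-entropy loss during training and explicitly at prediction time.

```python
# Dense 160-64-32-16-2 classifier with ReLU hidden activations.
import torch
import torch.nn as nn

backbone = nn.Sequential(
    nn.Linear(160, 64), nn.ReLU(),
    nn.Linear(64, 32), nn.ReLU(),
    nn.Linear(32, 16), nn.ReLU(),
    nn.Linear(16, 2),                       # raw logits; softmax applied below
)
loss_fn = nn.CrossEntropyLoss()             # applies log-softmax internally
optimizer = torch.optim.Adam(backbone.parameters(), lr=1e-3)

def predict(hist):
    """Softmax class probabilities for a histogram tensor of shape (..., 160)."""
    with torch.no_grad():
        return torch.softmax(backbone(hist), dim=-1)
```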
Considering the experimental data on which we want to test the network's prediction later, we simulate 16000 detection events to generate each training input vector.
We train the NN with data generated from Fock, squeezed-coherent, and single-photon-added coherent states (SPACS) as states that show nonclassical signatures and with coherent, thermal, and mixtures of coherent states as states showing classical characteristics; see Appendix A for a discussion of this choice. All families of states used in the training are summarized together with their parameters in Appendix B. To account for realistic (imperfect) scenarios, we chose an overall efficiency of the homodyne measurement of η = 0.6 [33]. Note that the quantum efficiency, which represents external limitations such as channel or detector efficiencies, can equivalently be used to describe noisy quantum states. Thus, we train the network with data that correspond to the detection of realistic, lossy quantum states.
B. Identification of nonclassicality
In the training process, we assign the value 0 to all classical quadrature data and the value 1 to nonclassical data. The output of the NN is a value r between 0 and 1 that provides a way to discriminate classical and nonclassical data. A high output value (close to 1) indicates the nonclassical character of the tested quadrature data. We choose a threshold value t above which we say that the NN identifies nonclassicality. As our goal is to faithfully identify nonclassicality, we set t = 0.9. This means that, for r > t = 0.9, we conclude that the NN identifies nonclassicality. In this way, we might reject some nonclassical states to be recognized as such, but we minimize the risk of falsely recognizing classical states as nonclassical ones. Note that depending on the specific requirements and the choice of trained and studied states, the value of t can be adapted.
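The decision rule then reduces to a simple threshold test (a sketch; that the second output neuron carries the nonclassical weight is our assumption):

```python
def flags_nonclassical(model, histogram, t=0.9):
    r = model(histogram)[..., 1]  # NN output r in [0, 1]
    return r > t                  # conservative threshold to minimize false positives
```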
In this context, it is important to stress that the result of the NN can only be an indication for nonclassical states; cf. also Ref. [50]. A certification of nonclassicality requires a full analysis including the evaluation of a nonclassicality test (witness) and a proper treatment of errors. While such an analysis can be rather involved, the proposed NN approach allows one to implement an easy and fast identification of nonclassicality. Therefore, it provides a useful tool for pre-selecting and sorting of data or the online, in-laboratory monitoring of experiments.

FIG. 2. Nonclassicality prediction of the neural network (NN) on the training states [coherent, thermal, and mixed coherent states as classical ones; Fock, squeezed-coherent, and single-photon-added coherent states (SPACS) as nonclassical ones], each in its corresponding state-parameter domain. α is the coherent amplitude, n is the number of photons, and n̄ is the mean number of photons. The gray horizontal line corresponds to the nonclassicality threshold t = 0.9. Note that for the squeezed-coherent states, the squeezing parameter ξ is chosen randomly in ξ ∈ [0.5, 1] and is not shown in this plot. For each Fock state, the NN prediction is tested for four different simulations of the quadrature measurements. For details on the state parameters, see Appendix B.
C. Performance of the network on trained states
In Fig. 2, we show the output r of the network for the different families of training states in their corresponding parameter ranges. All training families are correctly and consistently recognized to be classical or nonclassical. This holds for the total parameter regions of the considered states (cf. Appendix B), indicating that the training of the NN is successful in the sense that the network learned to correctly classify the states from the training set into classical and nonclassical ones.
IV. APPLICATION TO EXPERIMENTAL DATA
Here, we will use the trained NN for the identification of nonclassicality from experimental quadrature data. We analyze data from two different families of states: single-mode squeezed states and SPACS. This analysis will demonstrate the strength of the network approach as a fast and easy-to-implement characterization tool for experimental data.
A. Squeezed vacuum states
The first nonclassical experimental state we consider is a squeezed vacuum state, squeezed along the real axis of phase space. Details on the experimental realization can be found in Ref. [60].
In the measurements, the homodyne phase setting is changed continuously within the interval φ ∈ [0, 2π]. The resulting measurement data are then divided into 125 bins of size ∆φ = 2π/125, such that ∼16000 detection events are grouped together to constitute an input vector of the NN. For our analysis, the amount of squeezing |ξ_exp| and the quantum efficiency η_exp of the detectors do not have to be known, which highlights the practicability of the NN prediction.
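A sketch of this grouping step, assuming equal-length arrays phi and x holding the phase settings and quadrature outcomes of the sweep:

```python
import numpy as np

def bin_by_phase(phi, x, n_phase_bins=125):
    """Split quadrature samples x into phase bins of width 2*pi/125."""
    edges = np.linspace(0.0, 2.0 * np.pi, n_phase_bins + 1)
    idx = np.clip(np.digitize(phi, edges) - 1, 0, n_phase_bins - 1)
    return [x[idx == k] for k in range(n_phase_bins)]  # ~16000 events per bin here
```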
In Fig. 3 (bottom), we show the prediction of the network for the nonclassicality of the squeezed state with respect to the homodyne phase setting together with the variance of the measured quadrature distribution. Additionally, the quadrature distributions p(x) for φ = 0 and φ = π/2 (solid) compared with the vacuum quadrature distribution (dashed) are displayed (top). It is known that nonclassicality in quadrature data can be verified by observing single-mode quadrature squeezing, see, e.g., Ref. [61]. That is, if the quadrature variance Var[x(φ)] is below the vacuum noise for some values of φ, Var[x(φ)] < 1/4, nonclassicality is detected. We see that the domain of nonclassicality classification of the network coincides well with the domain of nonclassicality detection by sub-shot-noise variance.
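For comparison, the sub-shot-noise criterion amounts to a one-line check (vacuum variance 1/4 in the convention of Appendix B):

```python
import numpy as np

def sub_shot_noise(x_phi, vacuum_var=0.25):
    """True if the quadrature variance falls below the vacuum level, Var[x] < 1/4."""
    return np.var(x_phi, ddof=1) < vacuum_var
```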
In short, we confirm that the NN learns the standard nonclassicality classifier of sub-shot-noise variance. If one is simply interested in the detection of squeezing, measuring the variance of the quadrature distribution remains sufficient. However, as discussed below, in contrast to a mere variance classifier, the NN can learn how to identify further nonclassicality features. It is more flexible than the squeezing condition which recognizes only one specific nonclassical feature, and it can be advantageous in scenarios where the underlying quantum state is not known and cannot be captured by a simple variance condition.
B. SPACS
Let us now analyze the prediction of the network for experimentally generated SPACS, which are the result of a single application of the photon creation operator onto a coherent state. In principle, such states are always nonclassical, independent of the input coherent state; however, they present an evident Wigner negativity and resemble single-photon Fock states only for small coherent-state amplitudes. On the other hand, for intermediate amplitudes, they also present quadrature squeezing. Exhibiting a variety of different quantum features in different parameter regions, SPACS are therefore particularly interesting candidates for testing the performance of the NN. The experimental data consist of quadrature values, measured via homodyne detection, for the states N â†|α⟩ (N is a normalization constant) with 14 different values of α ∈ R+. To experimentally generate such optical states, we injected the signal mode of a parametric down-conversion crystal with coherent states obtained from the 786 nm emission of a Ti:Sa mode-locked laser [62]. When the same crystal is pumped with an intense ultraviolet beam, obtained by frequency doubling the same laser, the detection of an idler photon heralds the addition of a single photon onto the seed coherent state. In other words, each idler detection event announces the presence of a SPACS along the signal mode. Performing heralded homodyne detection on this mode, we measured the quadrature distributions along 11 different quadrature angles φ for each value of α [62]. Mode mismatch between the seed coherent states and the pump and LO beams, optical losses, electronic noise, and limited detector quantum efficiency in the homodyne measurement setup are the main causes of a non-unit overall efficiency of η_exp ≈ 0.6 in the experiment. For each state, 15963 detection events are used to construct the network input vector.
In Fig. 4(a), we show the (binary) prediction of the network for the experimental SPACS data, together with exemplary quadrature distributions p(x) for different combinations of α and φ. We observe that the ability of the NN to identify nonclassicality depends crucially on the homodyne phase setting. For sin φ ≈ 0, SPACS are identified as nonclassical in a wide range of α; cf. Fig. 4(b) for the detailed NN predictions for this case. On the other hand, for suboptimal directions, SPACS are rarely recognized as nonclassical by the NN (except for small α). Also, for large α, SPACS are generally classified as classical in all directions. As a comparison, we show the NN prediction for experimental homodyne data generated by coherent states in Fig. 4(c) for the same parameters as used in Fig. 4(b). The network correctly recognizes coherent states as classical.
The phase-dependent behavior of the NN output for the experimental SPACS can be explained by the fact that, for sin φ ≈ 0, the quadrature distributions differ significantly from the one produced by a coherent state, while for other directions, the corresponding quadrature distributions closely resemble the ones of coherent states [62][63][64]. For small α < 0.5, SPACS resemble single-photon states and are thus recognized as nonclassical at all quadrature angles [see p(x) for α = 0.32 in Fig. 4]. On the other hand, for large α, the quadrature distribution of the SPACS approaches the one of coherent states also in the optimal direction (φ = 0), and therefore, the NN eventually does not indicate nonclassicality anymore. In this regime, it is known that SPACS can be a good approximation of a coherent state of a larger amplitude [65]. The similarity of the SPACS quadrature distribution p(x) for large α and the distribution from a coherent state explains the difficulty for the NN to classify SPACS as nonclassical in this regime.
To summarize, for an optimal homodyne phase setting, SPACS are identified as nonclassical in a wide range of parameters. This provides a simple method for testing the nonclassicality of SPACS directly from quadrature distributions. As we discuss below, this identification is successful even in a parameter regime where the homodyne distribution shows neither sub-shot-noise variance nor similarity to Fock states. Therefore, the NN prediction proves operational for several different states and nonclassicality features.
V. INFLUENCE OF THE TRAINING SET AND APPLICATION TO UNTRAINED DATA
In this section, we first discuss the ability of the NN to recognize different features of nonclassicality at the same time. Then, we test its performance to recognize nonclassicality of states that were not seen in the training phase and of measurement data consisting of varying sample sizes.
A. Beyond single-feature recognition
To get some insight into which features are learned by the NN, we examine the performance in recognizing simulated SPACS of a network trained without SPACS; see Fig. 5. We observe that a network which is not trained with SPACS recognizes the latter only in specific parameter regions (teal dots). For |α| ∈ [0, 0.5], SPACS are recognized as nonclassical states due to their similarity to single-photon states. On the other hand, in the parameter domain |α| ∈ [1, 2], their nonclassicality is recognized because the variance of the quadrature distribution is significantly smaller than the vacuum variance. Beyond that, the distribution does not resemble Fock states and has a large quadrature variance and is, therefore, not classified as nonclassical. For |α| > 3, the variance approaches the vacuum variance, making a correct classification as nonclassical impossible. In total, we see that the network can identify some SPACS even if they were not part of the training set. The network effectively identifies similarity to Fock states and sub-shot-noise variances. This is one example of the general fact that common features can lead to the identification of untrained data. In comparison, a NN that also used SPACS for its training can only achieve its performance (cf. Fig. 2) by learning how to recognize similarity to SPACS where they do not resemble Fock or squeezed states. Therefore, we conclude that the network is sensitive to different nonclassical features at the same time and, thus, identifies nonclassicality beyond single features. Hence, a properly trained network can be advantageous, as it can recognize different nonclassical features for which one would otherwise need to implement different test conditions. This is particularly useful if the nonclassical features of the state to be tested are unknown. As we have just seen, a state need not be part of the training set to be recognized by the network. The above analysis also indicates the necessity of training a deep NN to perform this task, since simple baseline models like, e.g., sub-shot-noise variance only provide single-feature recognition.
B. Application to untrained data
Now we discuss the performance of the NN on states which are not used in the training. We apply the network to the family of so-called (odd) cat states |α−⟩ ∝ |α⟩ − |−α⟩ with α ∈ R+. As all states in this family consist of a coherent superposition of coherent states, they are all nonclassical. In Fig. 6, we show the α-dependent prediction r of the NN for quadrature measurements simulated for |α−⟩. We use quadrature angles (a) φ = π/2 and (b) φ = π/4. For each subfigure, we additionally display the quadrature distribution p(x) for different values of α (solid) compared with p(x) for the same parameters but using a quantum efficiency η = 1 (dashed). For both quadrature angles, the network correctly classifies the states as nonclassical in a significant range of α. Thus, this example shows that the NN can identify nonclassicality also of untrained states. For larger α, cat states are not identified as nonclassical. This behavior can be explained as follows. For small α, the cat state resembles a single-photon Fock state and can therefore be identified as nonclassical. For larger α and measured along φ = π/2 with unit quantum efficiency η = 1, the quadrature distribution develops a nonclassical interference pattern (a, dashed). However, for a realistic efficiency η = 0.6, this interference is smoothed away (a, solid) such that the states are eventually classified as classical. Surprisingly, by choosing a different quadrature angle of, e.g., φ = π/4, the cat states are classified as nonclassical in a wider range of α [Fig. 6(b)]. This is because the quadrature distribution still resembles a Fock state in this direction. Note that the performance of the NN prediction for cat states can be increased by including this family in the training process.
In summary, the NN is able to identify the nonclassicality also for states that were not used in the training process. However, for an optimized performance, it remains practical to adapt classes of states and parameter ranges in the training, see Appendix A.
C. Influence of the sample size

Finally, we want to discuss the prediction of the NN when it is given measurement data with a smaller sample size than that used in the training phase. In Fig. 7, we show the NN nonclassicality prediction r for experimental quadrature data of a SPACS (yellow) and a coherent state (teal) for α = 0.32 and φ = 0. We observe that a NN trained with sample sizes of 16000 (dashed line) can correctly classify these two states for measurement data starting from sample sizes as low as ∼800. Decreasing the sample size even further results in false classification of coherent states as nonclassical and vice versa, which renders the NN prediction unreliable in this regime.
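Such a scan can be sketched by re-histogramming truncated data sets and recording the NN output (this reuses the quadrature_histogram helper sketched above; model stands for the trained network):

```python
import torch

def prediction_vs_sample_size(model, x, sizes=(200, 400, 800, 1600, 4000, 16000)):
    """NN output r computed from the first n detection events, for several n."""
    out = {}
    for n in sizes:
        h = torch.as_tensor(quadrature_histogram(x[:n]), dtype=torch.float32)
        out[n] = float(model(h)[..., 1])
    return out
```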
This analysis shows the flexibility of the NN even once it has been trained. Importantly, the NN can provide conclusive predictions based on comparably very small sample sizes, which opens the possibility of online classification during measurements or fast (pre-)classification of data. Note that the performance of the NN for small sample sizes can also be improved by training it with the corresponding sample size.

FIG. 7. Prediction r of the neural network (NN) for experimental data from single-photon-added coherent states (SPACS; yellow) and coherent states (teal) for α = 0.32 and φ = 0 as a function of the measurement sample size. The dashed vertical line indicates the sample size 16000 that was used in the training phase.
VI. CONCLUSIONS
In this paper, we introduced an artificial NN-based nonclassicality identifier for single-mode quantum states of light measured with balanced homodyne detection. We trained the network using simulated homodyne detection data for realistic noisy measurements of different classical and nonclassical states. We observed that the trained network can correctly classify different classical and nonclassical states, i.e., coherent states, squeezed states, and SPACS, from real experimental data. Furthermore, the network recognizes certain nonclassical states that were not used in the training phase of the network. Compared with typical nonclassicality conditions based on homodyne tomography or other more complex nonclassicality tests, the strength of our approach lies in its simple implementation and the fact that only a small amount of data is required. We would like to emphasize that the NN nonclassicality prediction cannot certify nonclassicality and, if necessary, should be complemented by an error-proof nonclassicality witness.
The ML-based classification offers a fast and accessible method to sort and preselect experimental data, considering that it circumvents the need to first perform homodyne tomography or the calculation of complex test conditions and, as we showed, performs well also on small sample sizes. It is furthermore easy to implement and applicable in multiple experimental settings. ML has been used before for the detection of quantum effects [50][51][52]. In this context, it is important to highlight that the presented approach can detect phase-sensitive nonclassical features, which was not possible with previous results [50].
Further, the network approach can be used to search for interesting experimental parameter regimes, especially if the production rate of detection events is small. To maximize the accuracy of the NN prediction in experiments, any specific information about possible states and noise (such as phase or amplitude noise) should be included in the training phase. Finally, note that the presented approach can be generalized to multi-mode scenarios and might be adapted to the identification of entanglement in a similar fashion. Also, different additional ML methods such as convolutional layers or regularizations can be considered to optimize the performance of the NN nonclassicality prediction and make it more applicable to untrained data.
states have to be included in the training because, otherwise, training only on classical states with single-peaked quadrature distributions, the NN might interpret double- or multi-peak structures as features of nonclassicality. However, the choice of which classical state to use here is not unique. For instance, a different classical state that typically occurs in experiments is a phase-averaged coherent state, ρ_av = ∫_0^{2π} dφ |αe^{iφ}⟩⟨αe^{iφ}| / (2π). Using ρ_av instead of ρ_mix in the training results in a similar performance of the NN as in the main text, with the exception that, for larger α, ρ_mix (and therefore also cat states measured along φ = 0) are classified as nonclassical.
This points to an important caveat of the NN classification of nonclassicality: as mentioned in the main text, the different states used in the training phase must be chosen carefully, given the experimental conditions. Training with more families of classical states decreases the probability of false identification of nonclassicality for states that were not seen in the training. At the same time, it makes it harder for the NN to learn nonclassicality features of the corresponding nonclassical training states. This discussion shows that the NN nonclassicality classification, while representing a simple and fast nonclassicality identification if possible input states are known, is not universal and does not yield a strict nonclassicality certification.
Appendix B: Parameters and probabilities used for the simulation of quadrature measurement data

Here, we specify the state-dependent quadrature probability distributions and the corresponding parameters used in the simulation of quadrature measurement data for the different states in the main text. In Table I, we list the different states together with the corresponding parameter regions used in the simulations and the quadrature distribution along the quadrature angle φ. Note that we use a vacuum variance of 1/4.
For the simulation of the training data, we fixed a quadrature angle φ = 0. For thermal and Fock states, this restriction does not influence the distribution, as these states are phase insensitive. For coherent states with amplitude α, the distribution along a nonzero φ is equivalent to the one of a coherent state with amplitude α cos φ, measured along a zero quadrature angle. For squeezed coherent states and SPACS, this choice assures that only quadrature distributions which show nonclassical features are used in the training.
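As an illustration of such a simulation, quadrature data for a lossy coherent state reduce to Gaussian sampling (our sketch; it assumes the standard beam-splitter loss model, in which the amplitude is rescaled by √η while the variance of a coherent state stays at the vacuum value 1/4):

```python
import numpy as np

rng = np.random.default_rng(seed=1)

def coherent_quadratures(alpha, phi, eta=0.6, n_events=16_000):
    """Homodyne samples for a coherent state (alpha real) at quadrature angle phi."""
    mean = np.sqrt(eta) * alpha * np.cos(phi)               # effective amplitude alpha*cos(phi)
    return rng.normal(loc=mean, scale=0.5, size=n_events)   # std = sqrt(1/4)
```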
As noted in Ref. [59], the different parameter limits are chosen such that the probability for an event outside the considered measurement range, |x| > 8, is small (< 10^−6). Note that, for SPACS, we further restrict the parameters (|α| ≤ 3) to a domain where the network is able to separate them clearly from the classical states. For the simulation of the squeezed states, the squeezing parameter is chosen uniformly in ξ ∈ [0.5, 1].
The emergence and growth of a submarine volcano: The Kameni islands, Santorini (Greece)
The morphology of a volcanic edifice reflects the integrated eruptive and evolutionary history of that system, and can be used to reconstruct the time-series of prior eruptions. We present a new high-resolution merged LiDAR-bathymetry grid, which has enabled detailed mapping of both onshore and offshore historic lava flows of the Kameni islands, emplaced in the centre of the Santorini caldera since at least AD 46. We identify three new submarine lava flows: two flows, of unknown age, lie to the east of Nea Kameni, and a third submarine flow, located north of Nea Kameni, appears to predate the 1925–1928 lava flows but was emplaced subsequent to the 1707–1711 lava flows. Yield strength estimates derived from the morphology of the 1570/1573 lobe suggest that submarine lava strengths are approximately two times greater than those derived from the onshore flows. To our knowledge this is the first documented yield strength estimate for submarine flows. This increase in strength is likely related to cooling and thickening of the dacite lava flows as they displace sea water. Improved lava volume estimates derived from the merged LiDAR-bathymetry grid suggest typical lava extrusion rates of ~2–3 m³ s⁻¹ during four of the historic eruptions on Nea Kameni (1707–1711, 1866–1870, 1925–1928 and 1939–1941). They also reveal a linear relationship between the pre-eruption interval and the volume of extruded lava. These observations may be used to estimate the size of future dome-building eruptions at Santorini volcano, based on the time interval since the last significant eruption.
Introduction
Analysis of lava flow morphology can improve our understanding of historical effusive eruptions by providing insights into the evolution of flow fields, lava effusion rates and bulk rheological properties (e.g., [31,14,5]). Lava flows are considered non-Newtonian and are often modeled as Bingham fluids, which require a critical shear stress (yield strength) to be exceeded to initiate viscous flow [34,19]. Determination of rheological properties (e.g., yield strength) provides insight into eruptive behaviour and the origin of flow morphologies (e.g., [12,38,21,30,3]). Morphological studies on terrestrial volcanoes reveal strong positive correlations between, for example, silica content and lava lobe width, which enables the estimation of lava flow compositions on Earth, and elsewhere, by remote sensing (e.g., [41]).
Prior work on the volcanology of Santorini has focused almost exclusively on observations of sub-aerial exposures of lava flows. No previous studies have attempted to understand the submarine volcanic activity that has accompanied the growth of the Kameni islands over the past few thousand years. Earlier work on the Kameni islands used a high resolution LiDAR dataset to map the subaerial extent of the historical dacite lava flows [31]. Pyle and Elliott [31] used a LiDAR dataset acquired in 2004 which comprised 4.52 million point measurements over an area of ~8 km². Unfortunately a section of data was missing from the central part of Nea Kameni, due to absorption from low-lying cloud. In addition, Pyle and Elliott [31] derived bulk rheological properties of these lava flows, and used the time-predictable nature of historic eruptions on Nea Kameni to develop a forecast for the duration of future dome-forming eruptions, based on the relationship between eruption length and the time interval between consecutive eruptions (pre-eruption interval). This current study is based on a repeat LiDAR survey in May 2012, shortly after a period of volcanic unrest from January 2011 to April 2012, characterized by caldera-wide inflation and increased seismicity [25,29,8]. During this time, weather conditions were more favorable and full LiDAR coverage was achieved. In this paper, we also combine the new 2012 onshore LiDAR dataset with high-resolution swath bathymetry data to determine, for the first time, the morphology of the entire subaerial and submarine volcanic structure of the Kameni islands. This merged dataset provides complete coverage of the historic lava flows, enabling us to map the extent of both onshore and offshore extrusion events in the vicinity of the islands since AD 46 (Table 1 of Supplementary Material). Updated lava flow outlines and thicknesses are used to refine estimates of erupted volumes for each of the historic flows, including previously unidentified submarine flows and cones. This allows a new analysis regarding the relationship between eruption volume and pre-eruption interval that may be used to forecast the size of future dome-forming events at Santorini.
Methodology
The digital elevation model (DEM) of Santorini and the surrounding seabed was generated by merging onshore LiDAR data of the Kameni islands, high-resolution swath bathymetry of the seabed and a digitized elevation model of the Santorini island group from the Hellenic Military Geographical Service (HMGS).
Onshore LiDAR data was acquired over the central volcanic islands of Nea Kameni and Palea Kameni on the 16th May 2012 by the UK's Airborne Research and Survey Facility's (ARSF) Dornier 228 aircraft. The aircraft was equipped with a Leica ALS50 Airborne Laser Scanner, AISA Eagle and Hawk hyperspectral instruments, a Daedalus 1268 Airborne Thematic Mapper (ATM) and a Leica RCD105 39 megapixel digital camera. Two stand-alone georeferencing systems recorded position measurements at both the sensor and the aircraft frame [23]. The data were combined with ground control measurements from two local continuous GPS stations (MKMN and DSLN) to obtain accurate aircraft position measurements.
The survey comprised 12 north-south flightlines acquired in Single Pulse in the Air (SPiA) mode, at an altitude of ~1100 m and average speed of 135 knots (70 m s⁻¹). An additional SW-NE flightline was acquired at an altitude of ~2000 m and speed of 143 knots (74 m s⁻¹), using high resolution Multiple Pulse in the Air (MPiA) (e.g., [35]) LiDAR. The SPiA LiDAR was acquired using a pulse repetition frequency (PRF) of 94.7 kHz and scan frequency of 58.2 Hz. The MPiA flightline was acquired using a PRF of 119 kHz and scan frequency of 55.7 Hz. The final dataset comprised >40 million point measurements and provided an average point density of ~5 points per m².
Following the application of a point cloud filter to remove noisy data points, we generated a new digital elevation model (DEM) from the 2012 LiDAR dataset using gridding functionality available in Generic Mapping Tools (GMT) software. To minimize the roll-boresight error (e.g., [40,11]) and the potential for acquisition artefacts in overlapping regions between adjacent flightlines, the point cloud data was resampled and filtered prior to gridding. The data points were then interpolated to a 2-m grid using a continuous curvature surface gridding algorithm.
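The gridding itself was done with GMT's continuous curvature (splines-in-tension) algorithm; purely as an illustration of the interpolation step, a SciPy-based stand-in (not the same algorithm) could look as follows:

```python
import numpy as np
from scipy.interpolate import griddata

def points_to_grid(x, y, z, cell=2.0):
    """Interpolate scattered, filtered LiDAR points onto a regular grid (cell size in metres)."""
    xi = np.arange(x.min(), x.max(), cell)
    yi = np.arange(y.min(), y.max(), cell)
    gx, gy = np.meshgrid(xi, yi)
    return gx, gy, griddata((x, y), z, (gx, gy), method="cubic")
```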
Multibeam bathymetric surveys were carried out by R/V AEGAEO of the Hellenic Centre for Marine Research (HCMR), during three cruises carried out in 2001 and 2006, covering an area of 2480 km² over the Santorini volcanic field [26]. The surveys utilized a 20 kHz, hull-mounted SEABEAM 2120 swath system, suitable for operation in water depths between 20 and 6000 m and at speeds up to 11 knots. The system forms 149 beams over a maximum angular coverage of 150°, covering a swath width up to 7.5 times the water depth. The typical water depth in the survey area is 500 m, corresponding to a swath width of 3.75 km. The average position of the ship was determined to within ±10 m by GPS navigation (Trimble 4000). The multibeam data processing included georeferencing using navigation data, removal of erroneous beams, noise filtering, interpolation of missing data and removal of rogue points (e.g., [2]). The digital elevation model (DEM) of the Santorini outer island group (comprising the islands of Thera, Thirasia and Aspronisi) was produced by digitisation of height contours (interval 4 m) of the topographic maps of HMGS and by the triangulation network of HMGS with an accuracy of 4 cm. The three datasets were gridded together at 15 m interval using a continuous curvature polynomial method (Fig. 1). Due to the lack of shallow bathymetric measurements, interpolation was required for near-shore regions with water depths between 0 and 100 m (this corresponds to a maximum distance of ~330 m offshore from the Kameni islands).
A higher resolution 5 m grid was obtained for the inside of Santorini caldera, providing a detailed onshore-offshore grid (Fig. 2) and enabling the first joint mapping of both subaerial and submarine historic lava flows emplaced since at least AD 46 in the centre of the Santorini caldera (Fig. 3 and Supplementary Figs. 1 and 2).
High-resolution merged LiDAR-Bathymetry grids have been utilised at other volcanoes for improved geomorphological analysis of volcanic deposits and to facilitate hazard mapping [17,32]. In the current study, a series of attribute maps were used to aid the delineation of the extent of flows (both onshore and offshore). These included hillshade, curvature and slope attributes generated using GMT software. The revised lava flow outlines were then used to compute accurate volumetric estimates for each of the historic flows.
Lava flow volumes were estimated in two ways. Firstly, volume calculations were carried out using Surfer software to determine the residual volume between two grid files, representing the post-eruption and pre-eruption surfaces. This was done sequentially, in the reverse of the chronological order in which the flows were originally emplaced. For example, the first post-eruption surface is defined by the present-day grid of the Kameni islands. The first pre-eruption surface is the estimated topography prior to the most recent eruption in 1950. This was generated by stripping off the region within the outline of the 1950 lava flow and creating a new surface by interpolating the data gap using a natural neighbour gridding method. This method produces a smooth surface, simulating the relief prior to emplacement of the lava flow, and was repeated for each of the historic flows so that the simulated pre-eruption grid becomes the post-eruption grid for the earlier lava flow. The volumes for the individual flows were generated by subtracting the hypothetical pre-eruption surfaces from the post-eruption surfaces. A similar DEM differencing technique was employed by Coltelli et al. [4] and Neri et al. [24] to determine lava flow volumes for historic eruptions at Mt Etna.
To gain a better understanding of the errors associated with the volume estimates, a second method was employed using GMT software. In this instance polygons outlining the flows from each historic eruption were again used to mask out the target flow from the DEM. This time the gap in the data was interpolated using the same continuous curvature surface gridding algorithm as was used to produce the present-day DEM, with the variable tension adjusted to derive a smooth near-flat pre-eruption surface across the masked region. In general we found that a tension of 0.9 was optimal to produce geologically reasonable surfaces with the exception of the 1950 flow, where a tension of 0.2 was used. The DEM subtraction technique was then used to compute the residual volume between the new pre-and post-eruption surfaces, providing a second set of volumetric estimates for each of the historic lava flows (Supplementary Fig. 3 and Supplementary Material).
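The step common to both methods, differencing co-registered pre- and post-eruption surfaces over the flow outline, can be sketched as follows (array and argument names are ours):

```python
import numpy as np

def residual_volume(post_dem, pre_dem, cell=5.0, flow_mask=None):
    """Volume (m^3) between post- and pre-eruption DEMs gridded at a given cell size (m)."""
    dz = post_dem - pre_dem
    if flow_mask is not None:          # restrict to the digitized flow polygon
        dz = np.where(flow_mask, dz, 0.0)
    return float(np.nansum(dz)) * cell**2
```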
Pre-eruption interval was plotted against volume of extruded lava and eruption duration to determine the relationship between these eruptive properties. Finally, a series of profiles was extracted from the DEM to determine flow height and flow width for the offshore segment of the 1570/1573 lava flow, and estimates of yield strength were derived, assuming a Bingham rheology and using the flow width method [12,13].
Results and discussion
The new DEM of Santorini caldera (Fig. 1) is the first high-resolution merged dataset since Druitt et al. [6], who initially presented a relief model of Santorini caldera showing the subaerial topography and submarine bathymetry. The caldera walls rise to over 300 m above sea level, while the maximum depth of the caldera seafloor is about 390 m below sea level. The present configuration of the caldera consists of three distinct basins that form separate depositional environments [28]. The North Basin is the largest and the deepest (389 m), developed between the Kameni islands, Thirasia and the northern part of the Santorini caldera. It is connected by a narrow, steep-sided channel with a depth of 300 m to a scallop-shaped, ENE-WSW aligned feature that lies outside Santorini caldera, NW of Oia Village.
The smaller West Basin is encompassed by Aspronisi islet, Palaea Kameni and southern Thirasia, with a moderate maximum depth of up to 325 m. The flanks of the basin are gentle in the western part and steepen close to Thirasia and Aspronisi. The South Basin is bounded by the Kameni islands (to the north) and the southern part of the Santorini caldera (to the south). It covers a larger area and is shallower by ~28 m than the western basin. The seafloor morphology suggests that the southern basin has been separated from the western and northern basins by the development of a series of subaerial and submarine volcanic domes, aligned in a NE-SW direction. Apart from the subaerial Kameni islands, the best-known submarine extrusion is the reef close to Fira Port (referred to here as NK East), which has grown from 300 m up to 40 m b.s.l.
The Kameni islands reach a total relief of almost 470 m in the central part of the caldera and cover an area of ~21 km², assuming a perimeter represented by the black dashed line around the Kameni islands in Fig. 3. The submarine structure of the islands to the north and to the south is very different. Submarine lava flows can be observed to the east (NK East and Drakon), NE and NW of Nea Kameni and WSW of Palea Kameni (Fig. 3). In contrast, the southern part of Nea Kameni is characterised by abrupt submarine volcanic cliffs up to 250 m high [26,28].
Mapping of historic lava flows
The new high-resolution DEM of the Kameni Islands and the surrounding seabed reveals intricate details of the surface morphology (from 380 m b.s.l. up to 127 m a.s.l.) of young dacite lava flows, cones and domes (Fig. 2a), from which important morphological information can be extracted. The lava flows identified in the LiDAR data can now be followed beyond the shoreline and onto the sea floor, specifically the submarine continuation of historical lava flows in the northern part of Nea Kameni and at the NW and SW parts of Palea Kameni. The ridge east of Aspronisi connecting to the Kameni islands edifice (Fig. 1) may be made up of young lava domes, as suggested by Sigurdsson et al. [36], or it may represent a ridge of older rocks (a continuation of the Aspronisi islet) isolated by collapse events from the western and southern basins. The submarine structures east of Aspronisi are highlighted in the slope steepness plot, as are the volcanic domes east of Nea Kameni (Fig. 2b). There is also an individual small volcanic dome-like structure south of Nea Kameni, named Konus (Fig. 3a), which could also represent a large block that has fallen off Nea Kameni.
Pyle and Elliott [31] suggested that the Kameni line (a proposed active volcano-tectonic fault/fracture zone) may control vent locations for both historic and future dome-building eruptions on the Kameni islands. Our dataset corroborates this hypothesis, as the submarine volcanic dome within the NK East flow (40 m depth) [27] and the potential submarine volcanic domes east of Aspronisi [36,28] continue the NE-SW trend following the Kameni Line as defined by the volcanic vents across the Kameni islands [10,6,7,31]. Furthermore, the Kameni Line appears to divide the flooded caldera.
The attribute maps (displayed in Fig. 2b, c) assisted in mapping the extent of the submarine and subaerial historic lava flows extruded in the centre of the caldera since AD 46. Fig. 2b shows the distribution of slope gradient within the studied area and allows the identification of five zones, including (1) sub-horizontal areas with mean morphological slope 0–5°. Similarly, the on-shore lava flows from various historic eruptions on Nea Kameni display relatively steep slopes combined with extended, relatively low-slope surfaces on the bodies of the flows. This morphology is indicative of the relatively young age of the volcanic activity of the islands [31].
Palea Kameni has steep on-shore slopes in the SE part, with an axial structure trending NW-SE, and a planar surface at the top up to >100 m in height. The submarine slope gradient around the island diminishes gradually from 20–40° to 5–10°, with the exception of lava flows to the SW and NW of the island. These submarine lava flows are characterized by moderately sloping tops dipping at 10–20°, with steep frontal slopes of 20–40°.
The profile curvature map of Nea Kameni (Fig. 2c) is calculated by computing the second horizontal derivative of the DEM surface in the direction of the maximum slope. Areas of high positive curvature represent upwardly concave regions (hills) whereas negative curvatures depict upwardly convex regions (valleys). This attribute was useful in delineating the edges of different historic lava flows. It also highlights some channel levées and compression folds, both in the onshore and offshore data. Levée structures, tens of meters wide and tens of meters high, develop close to the vents. Channelized flows within the levées show prominent ridges with wavelengths ~20–40 m and amplitudes ~1–4 m at the northeastern offshore part of Nea Kameni. Fig. 3 shows the new onshore/offshore lava flow outlines which were used to calculate revised volumetric estimates for each of the historic flows and generate graphs of pre-eruption interval against the volume of extruded lava (section 'Volume estimates and rates of lava effusion'). Given the challenges of correlating lava flows from onshore and offshore locations, and attributing lobes covered over by subsequent flows, there are considerable uncertainties in these estimates. The lack of data between depths of 0–100 m adds to these uncertainties. While we made reasonable correlations given the data in hand, some of these correlations remain uncertain, and there may well be older flows which have not been identified. For example, the eruption of 197 B.C. apparently produced a large island called Hiera (see [9,31]), which must have later subsided. Fouqué [9] suggested this corresponded to a submarine reef, called Bancos, that lay between Nea Kameni and Fira. Bancos was subsequently buried by the 1925–1928 lavas. Likewise, there are assumptions inherent in the onshore-offshore correlations made for the 46–47, 726 and 1570/73 flows. The records of these eruptions are scant but, for example, Fouqué [9] reports that 17th Century accounts of the 1570/1573 eruption suggest that it lasted for a year, so it is not unreasonable that it might have formed as substantial a lava flow as we infer. These three flows are not included in the graphs of pre-eruption interval against the volume of extruded lava (section 'Volume estimates and rates of lava effusion'). We attempt to take account of these uncertainties by constructing minimum and maximum polygons for several of the flows (46–47, 726, 1707–1711 and 1866–1870). For these flows the solid line in Fig. 3a represents the possible minimum extent and the dashed line the possible maximum extent. These multiple flow outlines were used in the volumetric calculations to provide a range of estimates.
Flow identification, morphologies and yield strength
The new dataset provides a wealth of information, including previously unidentified submarine lava flows and cones, as well as some interesting submarine morphologies. Lava flow morphologies are dependent on a number of factors, including the temperature, yield strength, viscosity, effusion rate and local topographic gradient ([16] and references therein). Although several morphological studies have been undertaken on the subaerial-submarine transition of lava flows, these have been mostly limited to basaltic pāhoehoe and 'a'ā flows (e.g., [20,39,37]). The historic flows emplaced on Nea Kameni since 1570/1573 are classified as blocky lava flows. These flows are characterized by a broken fractured crust, comprising smooth angular blocks of dm- to m-scale. The flows are typically tens of meters thick and several kilometers long. Blocky lava flows tend to advance as single units, forming channels and occasional lava tubes [15]. These in turn feed the advancing lava front, which crumbles to produce a snout and may trigger small block and ash flows. Overflow levees develop along the edges of channels as the crust along the channel margin begins to cool and solidify. An example of this may be seen in the medial section of the 1570/1573 submarine flow and also in a flow situated offshore, north of Nea Kameni (NK North) (Fig. 3). An archetypal example of a single feeder channel is visible in the centre of the 1707–1711 lava flow, on the north-west edge of Nea Kameni (indicated by a black arrow in Fig. 5). Flow breaching is less common in blocky lava flows. However, a good example is visible within the onshore 1707–1711 lava flow and possibly within a thin channel in the central portion of the NK North flow (Fig. 3).
Although there are typically more similarities between onshore pāhoehoe and blocky lava flows in terms of the crust and flow emplacement mechanisms, our dataset suggests that during and following the subaerial-submarine transition, dacitic blocky lava flows may in fact have more in common with 'a'ā flows. Both Stevenson et al. [39] and Mitchell et al. [20] observed that during the subaerial-submarine transition of pāhoehoe and 'a'ā flows, the pāhoehoe flows exhibited a significant change in slope upon entering the water and were more likely to become arrested, whereas the 'a'ā flows remained comparatively unchanged and often advanced for several hundred meters. Several similarities are apparent between the documented transitional behavior of 'a'ā flows and the Nea Kameni 1570/1573 lava flows. The feature that we interpret as the submarine lava lobe from the 1570/1573 eruption displays a well-rounded margin, likely developed via lateral spreading of the lava front [15], and continues to extend offshore for several hundred meters. If our interpretation of this flow is correct, then it would appear that the marine transition had little impact on the reduction of flow length for blocky dacitic flows, most likely attributable to the substantial flow thickness, which in turn minimizes internal cooling.
Thin channels and possible tumulus-like structures are identified to the north of Nea Kameni within the NK north flows (Fig. 3a). The latter were identified during a cruise undertaken in 2011 on board E/V Nautilus. They are possibly formed by the influx of new lava increasing the internal pressure and fracturing of the outer crust. Other evidence for flow pressurization may include the longitudinal cleft identified in the center of the 1570/1573 off-shore lava lobe. This likely developed from the extension of a solidified, chilled crust via the continued movement and pressurization of a fluid core (e.g., [20]) or via lateral spreading on a convex upwards surface.
Figs. 5 and 6 display a series of elevation profiles across each of the historic lava flows. Of particular interest are the submarine extension of the 1570/1573 flow, the NK East and Drakon flows to the east of Nea Kameni and a newly identified submarine flow, NK North. The new flow was initially interpreted as an extension of the onshore 1925–1928 lavas because of its proximal location. However, following the analysis of elevation, slope and hillshade attributes (Fig. 2), along with volume calculations and historic reports [31], we suggest that these flows were extruded during an unreported submarine eruption. Analysis of flow morphologies suggests this extrusion occurred sometime after the 1707–1711 eruptions, but prior to the 1925–1928 eruption. The internal structure of the NK North flow suggests that the flow paths were determined by the pre-existing topography of the 1707–1711 flow, and an obvious break in slope is visible on both the hillshade attribute and the north-south oriented traverses that transect both the NK North and 1925–1928 flows (Fig. 6(a) and (e)). This break in slope appears at the edge of submarine talus deposits eroded off the 1925–1928 flows, which were identified during an oceanographic cruise in September 2011 [27]. In addition, the general morphology of the flow also changes significantly after this break in slope and is more similar to that observed on the Drakon lava flows, exhibiting submarine twin cone structures and intervening ridges. This morphology appears to be characteristic of submarine extrusions/eruptions, and has been identified on El Hierro, Canary Islands [33] and in the Marianas Arc [1]. To enable comparison of yield strength estimates for onshore lava flows (computed by [31]) with those of offshore lava flows, yield strengths were calculated for the submarine 1570/1573 lava lobe using flow widths and heights extracted from a series of transverse profiles displayed in Fig. 5. The 1570/1573 flow was determined to be the most suitable submarine flow for this calculation due to its characteristic lava lobe morphology discussed above. The flow width method [12,13] was used for this analysis, as it is considered more reliable than the levée width or levée height methods [38,31], based on the larger uncertainties associated with the latter techniques in estimating the flow slope (at the time of emplacement) and the levée width. For a lava flow of width W on a flat surface, the yield strength Y may be calculated using equation (1):

Y = Δρ g h² / W, (1)

where Δρ is the density contrast between the lava flow and seawater (taken as 1680 kg m⁻³), h is the height of the lava flow and g is the gravitational acceleration. The yield strength estimates are displayed in Table 1. The submarine lava flow displays an average width of 719 m, height of 74 m and yield strength of (129 ± 64) × 10³ Pa. This is approximately twice as large as estimates reported by Pyle and Elliott ([31], Table 3) for onshore lava flows [41]. To our knowledge this is the first documented yield strength estimate for submarine lava flows. The increase in apparent yield strength may reflect the transition from subaerial to submarine emplacement and is likely associated with increased cooling of the outer margins of the flow [12,18,22] or possibly a result of flow thickening during the onshore/offshore transition [37].
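As a quick consistency check of equation (1) with the quoted average dimensions (g = 9.81 m s⁻² assumed):

```python
def yield_strength(width_m, height_m, delta_rho=1680.0, g=9.81):
    """Flow-width method for a Bingham fluid on a flat surface: Y = delta_rho*g*h**2/W."""
    return delta_rho * g * height_m**2 / width_m

print(yield_strength(719.0, 74.0))  # ~1.26e5 Pa, i.e. of order (129 +/- 64) x 10^3 Pa
```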
Volume estimates and rates of lava effusion

Table 2 summarises the revised lava flow volume estimates for each of the historic eruptions of the Kameni islands. This includes estimates for two separate offshore flows east of Nea Kameni (NK East and Drakon), one north of Nea Kameni (NK North) and a small cone identified offshore, south of the 1866–1870 flow (Konus). In Table 3, we estimate lava effusion rates for four historic eruptions (1939–1941, 1925–1928, 1866–1870 and 1707–1711) on Nea Kameni. This analysis suggests a typical effusion rate during dome-forming eruptions on Nea Kameni of ~2–3 m³ s⁻¹. This is almost twice the average rate derived from the onshore data alone [31]. Fig. 7 reveals a linear relationship between pre-eruption interval (the intervening period between eruptions from historical records) and the volume of lava extruded in historic dome-forming eruptions. This correlation allows estimation of the size of future eruptions on Nea Kameni: for example, if an eruption were to occur in the next few years, the pre-eruption interval would be ~75 years (the time since the last significant eruption on Nea Kameni, from 1939 to 1941). The volume of extruded lava would be of order 8 × 10⁷ m³ and the eruption would continue for 2–3 years. Geodetic studies suggest that a possible melt volume of 1–2 × 10⁷ m³ has already been supplied to the shallow magma chamber during the recent (2011–2012) period of unrest [25,29].
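A sketch of this forecasting step (the input arrays would come from the historical records and Table 2; the use of numpy.polyfit is our choice, not specified in the paper):

```python
import numpy as np

def forecast_volume(intervals_yr, volumes_m3, repose_yr):
    """Fit the linear interval-volume relationship and evaluate it at a repose time."""
    slope, intercept = np.polyfit(intervals_yr, volumes_m3, deg=1)
    return slope * repose_yr + intercept

# e.g. forecast_volume(intervals, volumes, 75.0) -> of order 8e7 m^3, per the text
```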
Long-term rates of lava effusion and edifice growth
The total volume of the Kameni islands is 3.2 km³, of which about 0.5 km³ has erupted since 1570 AD (Table 2). Although it is clear that the entire Kameni edifice has formed and grown since the caldera-forming Minoan eruption of ca. 1600 BC, there are few constraints on when post-caldera volcanism began. The earliest reports, from Strabo, that are interpreted as evidence of an emergent island date back to 199–197 BC (Supplementary Table 1), though archaeologists have speculated that an earlier volcanic event may have triggered the departure of residents from Santorini and the formation of Cyrene in 630 BC [42]. Since 1570 AD, assuming that the most significant eruptions have been detected, the time-averaged extrusion rate (covering both periods of eruption and repose) has been ~10⁶ m³/yr, or 0.035 m³ s⁻¹. At these extrusion rates, the entire edifice of the Kameni islands would have been extruded in around 3200 years, which is within the bounds of the time period since the Minoan eruption. If average lava extrusion rates have been approximately constant since the initiation of the extrusion of the Kameni islands, this suggests that the first post-caldera eruptions started around 1200 B.C., long before the first historical reports of the islands emerging in 199 B.C.
Conclusions
This study highlights the benefits of combining high-resolution LiDAR and multibeam bathymetry data to accurately map the subaerial and submarine extensions of lava flows at partially submerged island volcanoes. The new dataset reveals a wealth of information regarding the emplacement of historic lava flows and insight into the bulk rheological properties of the magma. Several previously undetected submarine flows have been identified off the northern and eastern coasts of Nea Kameni. The ages of the NK East and Drakon flows are currently unknown; however, the NK North flow appears to have been emplaced prior to the 1925–1928 lava flows but subsequent to the 1707–1711 lava flows. Apparent yield strength estimates from the submarine 1570/1573 flow suggest a twofold increase in lava strength upon entering the ocean.
Accurate volumetric estimates derived from the merged LiDAR-bathymetry grid suggest a typical lava extrusion rate of ~2 m³ s⁻¹ during historic eruptions on Nea Kameni, which is approximately two times faster than the initial estimate by Pyle and Elliott [31] based solely on onshore data. The new volumetric estimates have allowed us to expand on the original work of Pyle and Elliott [31] in terms of forecasting the characteristics of future eruptions at the Kameni islands. We present a new relationship between the volume of erupted lava and the intervening period between eruptions based on the revised volumetric estimates. This will enable forecasting of the magnitude of new lava extrusions on the Kameni islands at the onset of future dome-forming eruptions. Our volume estimates of flows emplaced since 1570 AD lead to an average extrusion rate of ~10⁶ m³/yr. At this rate, the entire Kameni islands edifice could have been emplaced in ~3200 years, suggesting that activity here may have started around 1200 B.C., shortly after the Minoan eruption.
Influence of GeO2 Content on the Spectral and Radiation-Resistant Properties of Yb/Al/Ge Co-Doped Silica Fiber Core Glasses
In this study, Yb/Al/Ge co-doped silica fiber core glasses with different GeO2 contents (0–6.03 mol%) were prepared using the sol–gel method combined with high-temperature sintering. The absorption, fluorescence, radiation-induced absorption, and continuous-wave electron paramagnetic resonance spectra, as well as fluorescence decay curves, were recorded and analyzed systematically before and after X-ray irradiation. The effects of GeO2 content on the valence variations of Yb3+/Yb2+ ions, the spectral properties of Yb3+ ions, and the radiation resistance of Yb/Al/Ge co-doped silica glasses were systematically studied. The results show that even if the GeO2 content of the sample is relatively low (0.62 mol%), it can inhibit the generation of Yb2+ ions, slightly improve the spectral properties of Yb3+ ions in the pristine samples, and effectively improve radiation resistance. Direct evidence confirms that the generation of trapped-electron centers (Yb2+/Si-E'/Al-E') and trapped-hole centers (Al-OHC) was effectively inhibited by Ge co-doping. This study provides a theoretical reference for the development of high-performance, radiation-resistant Yb-doped silica fibers.
Introduction
Over the past few decades, rare-earth (RE)-doped silica fiber lasers and amplifiers have been studied extensively [1][2][3]. Owing to their reduced weight, small size, high output power, good beam quality, and high electro-optical conversion efficiency, Yb-doped fiber (YDF) lasers and amplifiers have applications in industrial manufacturing [4], laser lidar [5], space debris removal, and free-space optical communication [6]. However, the high laser output is strictly limited by the fact that YDF suffers from two-fold degradation [7,8], commonly referred to as the photodarkening (PD) and radiation darkening (RD) effects [9,10], when operating in amplifying conditions and harsh environments. PD and RD can be induced by pumping and external ionizing radiation, resulting in additional optical loss and a sharp decline in laser output power and slope efficiency.
Researchers generally conclude that the generation of color centers in the fiber is the main reason for the appearance of the darkening effect. Similar loss variations appear after long-term pumping or γ-ray irradiation, indicating that the types of color centers generated in PD and RD are the same [11]. Although the origin and mechanism of color centers are controversial, many studies have confirmed that Yb 2+ ions and oxygen hole center defects are the main causes of PD and RD in YDF [12].
The main methods to suppress the effects of PD and RD are gas loading and core composition optimization. Many researchers have demonstrated that H2-loading is beneficial for improving the radiation resistance of fibers [13,14], but there are some limitations as well. For example, H2 molecules can easily escape, the mechanical strength of the coating may decrease, and H2 causes additional absorption [15]. Even worse, owing to the reducing environment caused by H2, Yb3+ ions are easily reduced into Yb2+ ions [16]. Core composition optimization is the most fundamental method. Ce co-doping has been used to suppress the formation of aluminum-oxygen hole center (Al-OHC) defects [17,18], and the valence variation of Ce4+→Ce3+ reduces the generation of Yb2+ ions and further improves the radiation resistance. However, this presents the problem that Ce co-doping seriously impacts the spectral properties of Yb3+ ions [19]. Jetschke et al. proposed a method to improve the PD resistance by Al/P co-doping [20]. However, it reduces the absorption and emission cross-sections of Yb3+ ions, and the radiation resistance of the Yb/Al/P co-doped fiber is relatively weak [21,22]. It has been confirmed that Ge co-doping can reduce the radiation-induced absorption (RIA) caused by Al- and/or P-related defects during irradiation [23,24] and improve the radiation resistance of Er/Al/Ge co-doped glasses and fibers without negatively affecting the spectral properties of Er3+ ions [25]. However, the effects of Ge co-doping on the spectral and radiation-resistant properties of Yb-doped silica glasses have not yet been reported.
The absorption bands caused by radiation-induced color centers lie mainly in the ultraviolet-visible range [12]. Limited by Rayleigh scattering, the absorption of a fiber sample is relatively large in the ultraviolet-visible band, so it is difficult to collect accurate data from the fiber itself. In addition, continuous-wave electron paramagnetic resonance (CW-EPR) is an effective method for studying paramagnetic color centers, but owing to the small size of the fiber core it is difficult to obtain powder samples for CW-EPR tests. Therefore, in this work, uniform and large-scale Yb/Al/Ge co-doped silica glasses with different GeO2 contents were prepared using the sol-gel method combined with high-temperature powder sintering. By analyzing the absorption and fluorescence spectra and the fluorescence decay curves of the glasses, the effects of Ge co-doping on Yb2+ ion suppression and on the spectral properties of Yb3+ ions were studied. By comparing the changes in the luminescence intensity and fluorescence lifetime of Yb3+ ions before and after irradiation, the effect of Ge co-doping on the radiation resistance of Yb/Al/Ge co-doped glasses was studied, and the related mechanisms were revealed by RIA and CW-EPR spectra.
Preparation of Samples
Yb/Al/Ge co-doped silica glasses with different nominal GeO2 contents (x = 0, 1, 4, 8, 12) were prepared using the sol-gel method combined with high-temperature powder sintering [26]. The glasses were labeled GAY0, GAY1, GAY4, GAY8, and GAY12 (referred to as GAY samples) based on the theoretical content of GeO2. In addition, Al-doped and Ge-doped silica glasses were prepared using the same method for comparative analysis. The actual compositions of the samples were measured using inductively coupled plasma optical emission spectrometry (ICP-OES; radial-view Thermo iCAP 6300, Thermo Fisher Scientific Inc., Waltham, MA, USA). The theoretical and actual compositions are presented in Table 1. The actual Yb2O3 content was close to the theoretical values. Although part of the GeO2 volatilized, 50-62% of the theoretical content remained. The actual Al content was higher than the theoretical content because the glasses were contaminated by the corundum crucible. The samples were polished into bulk glasses of approximately 10 mm diameter and 2 mm thickness for further testing. The powdered samples, which were used for the CW-EPR tests, had masses of approximately 100 mg.
Analyses of Samples
The densities of the samples were measured using an SD-200L electronic densimeter based on the Archimedes drainage method. The refractive indices of the samples were measured using a 2010/M prism coupler (Metricon Corp., Pennington, NJ, USA). The absorption spectra were measured using a Perkin-Elmer Lambda 950 UV/VIS/NIR spectrophotometer. Fluorescence spectra and fluorescence decay curves were measured using an FLS920 steady/transient-state high-resolution fluorescence spectrometer (Edinburgh Instruments Ltd., Kirkton Campus, UK). The fluorescence spectra of Yb3+ ions were measured under 896 nm excitation from a xenon lamp. The fluorescence decay curves of Yb3+ ions were measured at 1020 nm under excitation by a 980 nm laser diode. The fluorescence spectra and fluorescence decay curves of Yb2+ ions were both measured at 525 nm under 330 nm xenon-lamp excitation, and the fluorescence decay curve of the Ge oxygen-deficiency center (Ge-ODC) was measured at 395 nm under 330 nm xenon-lamp excitation.
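The fluorescence lifetimes quoted below are extracted from such decay curves. As a rough illustration of the usual procedure (the authors do not give their fitting code, so the synthetic trace and parameter names here are assumptions), a single-exponential fit in Python might look like this:

```python
import numpy as np
from scipy.optimize import curve_fit

def single_exp(t, I0, tau, bg):
    """Single-exponential decay: I(t) = I0 * exp(-t / tau) + bg."""
    return I0 * np.exp(-t / tau) + bg

# Synthetic decay curve standing in for a measured 1020 nm trace
# (tau = 0.85 ms is an arbitrary, illustrative value).
t = np.linspace(0.0, 5.0, 500)            # time [ms]
rng = np.random.default_rng(0)
signal = single_exp(t, 1.0, 0.85, 0.01) + rng.normal(0, 0.005, t.size)

popt, _ = curve_fit(single_exp, t, signal, p0=(1.0, 1.0, 0.0))
print(f"fitted lifetime tau = {popt[1]*1e3:.0f} us")
```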
To study the effect of GeO 2 content on the radiation resistance of Yb/Al/Ge co-doped silica glasses, an X-ray instrument (XRad160, Precision X-Ray, Inc., Madison, WI, USA) was used to irradiate the samples. The total dose and dose rate of irradiation were 3000 Gy and 1440 Gy/h, respectively.
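For reference, the quoted dose and dose rate imply an exposure time of t = D/D-dot = 3000 Gy / 1440 Gy·h⁻¹ ≈ 2.1 h (about 125 min) per sample.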
The paramagnetic defects induced by irradiation were recorded using an E-580 Bruker Elexsys X-band EPR spectrometer (Bruker Co., Billerica, MA, USA). The microwave frequency was approximately 9.38 GHz. All measurements were conducted at room temperature (300 K). Although Ge co-doping may raise the refractive index of the fiber core, the numerical aperture (NA) of the fiber can still be controlled by creating a graded refractive-index layer [27,28].
Results and Discussion
Figure 2a shows the absorption spectra of the pristine GAY samples with different GeO2 contents in the range of 270-550 nm. The Ge-free (GAY0) sample has an absorption band at 330-450 nm, which is primarily attributed to the 4f-5d transition of Yb2+ ions [29,30]. With an increase in the GeO2 content, the absorption intensity of the Yb2+ ions is significantly reduced. Even in the GAY1 sample (GeO2 content of 0.62 mol%), the absorption intensity of Yb2+ ions is very weak. This shows that a low doping concentration of Ge can effectively suppress the formation of Yb2+ ions. As shown in the inset of Figure 2a, owing to the higher absorption intensity of Yb2+ ions, the GAY0 sample was yellowish, whereas the GAY12 sample was colorless and transparent. Figure 2b shows the fluorescence spectra of the pristine GAY samples in the range of 340-700 nm excited by a 330 nm xenon lamp. The fluorescence spectrum of the GAY0 sample has a very broad emission band at 525 nm, which is attributed to the 4f-5d transition of Yb2+ ions [29,30].
With an increase in the GeO2 content, the fluorescence intensity of Yb2+ ions decreased, and a new fluorescence peak appeared and grew simultaneously at 395 nm, which can be attributed to Ge-ODC defect emission [31]. The absorption of Ge-ODC mainly originates from the S0→S1 transition, and its emission primarily originates from the T1→S0 transition; the energy-level diagram of Ge-ODC can be found in Ref. [32]. As shown in the inset of Figure 2b, the GAY0 sample emits yellowish light under ultraviolet light, mainly owing to Yb2+ ion emission, whereas the GAY12 sample emits bluish light, mainly owing to Ge-ODC defect emission. These results further confirm that Ge co-doping can effectively suppress the generation of Yb2+ ions.
Figure 4a,b show the absorption and fluorescence spectra of Yb3+ ions in the GAY samples. The near-infrared absorption and emission of Yb3+ originate from intra-4f transitions between the 2F7/2 and 2F5/2 levels [30]. The absorption curves of the samples with different GeO2 contents roughly overlap, and the absorption intensities of the samples at 976 nm are almost the same. In addition, the fluorescence spectra of the samples under 896 nm excitation roughly coincide. The fluorescence intensity of the Ge co-doped samples is slightly higher than that of GAY0, which may be due to the suppression of Yb2+ ions by Ge co-doping. This shows that Ge co-doping has no negative impact on the absorption and fluorescence spectra of Yb3+ ions.
where, " • " and "°" represent an electron and a hole, respectively. This speculation is supported by the results of Kirchhof et al. [30]. Al-, P-, and Ge-related oxygen defect centers Figure 5 shows the absorption cross-section at 976 nm, emission cross-section, and fluorescence lifetime at 1020 nm of Yb 3+ ions. The absorption and emission cross-sections were calculated using the Lambert-Beer law and the Fuchtbauer-Lademnurg (F-L) formula [34]. With an increase in the GeO 2 content, the absorption cross-section and emission cross-section of Yb 3+ ions hardly changed, but the fluorescence lifetime of Yb 3+ ions increased gradually. Figure 5 shows the absorption cross-section at 976 nm, emission cross-section, and fluorescence lifetime at 1020 nm of Yb 3+ ions. The absorption and emission cross-sections were calculated using the Lambert-Beer law and the Fuchtbauer-Lademnurg (F-L) formula [34]. With an increase in the GeO2 content, the absorption cross-section and emission cross-section of Yb 3+ ions hardly changed, but the fluorescence lifetime of Yb 3+ ions increased gradually. The formation mechanism of Yb 2+ ions in Yb/Al co-doped silica glasses during vacuum sintering may be related to the disappearance of Al-related oxygen deficiency centers (Al-ODC). The corresponding chemical reactions are as follows: where, " • " and "°" represent an electron and a hole, respectively. This speculation is supported by the results of Kirchhof et al. [30]. Al-, P-, and Ge-related oxygen defect centers The formation mechanism of Yb 2+ ions in Yb/Al co-doped silica glasses during vacuum sintering may be related to the disappearance of Al-related oxygen deficiency centers (Al-ODC). The corresponding chemical reactions are as follows: where, " • " and " • " represent an electron and a hole, respectively. This speculation is supported by the results of Kirchhof et al. [30]. Al-, P-, and Ge-related oxygen defect centers (ODC) are easily formed during sintering in a vacuum or inert atmosphere (e.g., He). However, Kirchhof et al. found that in 6P 2 O 5 -94SiO 2 and 7GeO 2 -93SiO 2 (in mol%) glasses sintered in an inert atmosphere, the introduction of Yb 3+ ions did not significantly change the fluorescence intensity of Ge-ODC and P-ODC, and no Yb 2+ ion fluorescence was detected. In contrast, in 4Al 2 O 3 -0.5P 2 O 5 -95.5SiO 2 (in mol%) glasses sintered in an inert atmosphere, the introduction of Yb 3+ ions significantly reduced the fluorescence intensity of Al-ODC, and meanwhile, the fluorescence of Yb 2+ ions was detected. Herein, with an increase in the GeO 2 content, Ge-ODC became the main ODC defect at the expense of Al-ODC. Therefore, Yb 2+ ions were gradually inhibited. (ODC) are easily formed during sintering in a vacuum or inert atmosphere (e.g., He). However, Kirchhof et al. found that in 6P2O5-94SiO2 and 7GeO2-93SiO2 (in mol%) glasses sintered in an inert atmosphere, the introduction of Yb 3+ ions did not significantly change the fluorescence intensity of Ge-ODC and P-ODC, and no Yb 2+ ion fluorescence was detected. In contrast, in 4Al2O3-0.5P2O5-95.5SiO2 (in mol%) glasses sintered in an inert atmosphere, the introduction of Yb 3+ ions significantly reduced the fluorescence intensity of Al-ODC, and meanwhile, the fluorescence of Yb 2+ ions was detected. Herein, with an increase in the GeO2 content, Ge-ODC became the main ODC defect at the expense of Al-ODC. Therefore, Yb 2+ ions were gradually inhibited. 
Figure 6a,b show the fluorescence integral intensity in the range of 950-1200 nm and the fluorescence lifetime at 1020 nm of the GAY samples before and after irradiation. Before irradiation, with an increase in the GeO2 content, both the fluorescence integral intensity and the fluorescence lifetime of Yb3+ ions increased slightly. The increase in integral intensity may be related to the oxidation of Yb2+ ions to Yb3+ ions, and the increase in lifetime may be related to the lower phonon energy caused by the coordination of Ge to Yb3+ ions. After irradiation, the fluorescence integral intensity and fluorescence lifetime of all samples decreased, with the GAY0 sample showing the largest decrease; the decrease became smaller with increasing GeO2 content. This result shows that Ge co-doping can effectively improve the radiation resistance of Yb/Al co-doped silica glasses.
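The fluorescence integral intensity used in Figure 6a is simply the emission spectrum integrated over 950-1200 nm. A minimal sketch with placeholder arrays (the measured spectra are not reproduced here):

```python
import numpy as np

# wavelength [nm] and emission intensity (a.u.) for one sample,
# before and after irradiation (placeholder arrays)
wl = np.linspace(950, 1200, 251)
I_before = np.exp(-((wl - 1020) / 25) ** 2)
I_after = 0.8 * I_before                     # mock 20% post-irradiation loss

integral_before = np.trapz(I_before, wl)
integral_after = np.trapz(I_after, wl)
print(f"retained fraction = {integral_after / integral_before:.2f}")
```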
Effects of GeO2 Content on the Radiation Resistance of Yb/Al/Ge Co-Doped Silica Glasses
Previous studies have shown that the darkening effect of YDF is related to the valence variations of Yb2+/3+ ions and the formation of dopant-related point defects [16,21]. Compared with Yb2+ ions, point defects have a larger absorption cross-section and a wider absorption band, and their bands lie closer to the absorption and emission wavelengths of Yb3+ ions; therefore, they significantly impact the spectral properties of Yb3+ ions.
Al-doped and Ge-doped silica glasses without Yb3+ ions were prepared to eliminate the contribution of Yb2+ ion absorption to the RIA (see Figure 7a,b). The cumulative fitted peaks were consistent with the observed RIA. The RIA spectrum of the Al-doped sample was decomposed into six Gaussian components with peaks at 2.2, 2.9, 4.2, 4.8, 5.1, and 5.8 eV. These bands can be attributed to the Al-OHC defect (≡Al-O•, 2.2 and 2.9 eV) [35], the aluminum dangling bond (Al-E') defect (4.2 eV) [35], the peroxy radical (POR) defect (4.8 eV) [35], the Al-ODC defect (5.1 eV), and the silicon dangling bond (Si-E') defect (5.8 eV) [36]. The RIA spectrum of the Ge-doped sample was decomposed into five Gaussian components with peaks at 4.6, 5.1, 5.6, 5.8, and 6.4 eV. These bands can be attributed to the Ge(1) defect (4.6 eV) [37], Ge-ODC (5.1 eV) [38], the Ge(2) defect (5.6 eV) [39], the Si-E' defect (5.8 eV) [36], and the germanium dangling bond (Ge-E') defect (6.4 eV) [40]. The RIA intensity of the Ge-ODC defects was negative, which means that the Ge-ODC content decreased after irradiation, whereas the RIA intensities of all the other defect centers were positive, which means that their contents increased after irradiation.
Figure 7c,d show the CW-EPR spectra of the Al-doped and Ge-doped silica glasses after irradiation. The CW-EPR spectrum of the Al-doped sample was decomposed into three parts, corresponding to the POR, Si-E', and Al-OHC defect centers. Theoretically, owing to the hyperfine coupling interaction between the holes and the magnetic nucleus 27Al (I = 5/2, natural abundance ~100%), six hyperfine lines (2I + 1 = 6) should be observed at each g component (g1, g2, g3). However, since the hyperfine lines of the g components are superimposed on each other, this splitting is difficult to observe experimentally. The CW-EPR spectrum of the Ge-doped sample was decomposed into four parts, corresponding to the Ge-E', Ge(1), Ge(2), and Si-E' defect centers. Since the natural abundance of the magnetic nucleus 73Ge is only 7.76%, no hyperfine lines of the Ge-E', Ge(1), or Ge(2) defects are observed in the EPR spectrum.
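Such Gaussian decompositions are typically obtained by a least-squares fit of a sum of Gaussian bands in photon energy. The sketch below is a generic illustration seeded with the Al-related band positions quoted above; it is not the authors' fitting code, and the synthetic spectrum is an assumption:

```python
import numpy as np
from scipy.optimize import curve_fit

def gaussians(E, *p):
    """Sum of Gaussians; p = (A1, E1, w1, A2, E2, w2, ...)."""
    total = np.zeros_like(E)
    for A, E0, w in zip(p[0::3], p[1::3], p[2::3]):
        total += A * np.exp(-((E - E0) / w) ** 2)
    return total

E = np.linspace(1.5, 6.5, 400)                  # photon energy [eV]
# seed guesses at the six Al-related band positions quoted above
p0 = []
for E0 in (2.2, 2.9, 4.2, 4.8, 5.1, 5.8):
    p0 += [1.0, E0, 0.4]                        # amplitude, centre, width

# synthetic RIA curve standing in for a measured spectrum
ria = gaussians(E, *p0) + np.random.default_rng(1).normal(0, 0.02, E.size)
popt, _ = curve_fit(gaussians, E, ria, p0=p0)
print(np.round(popt[1::3], 2))                  # fitted band centres [eV]
```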
Al-ODC and Ge-ODC are diamagnetic centers (no CW-EPR signal) and therefore cannot be detected in the CW-EPR test. Although the Al-E' defect is a paramagnetic center, its spin-lattice relaxation time is relatively long, so it is usually detectable only in the scattering (non-absorption) mode at very low microwave power [36]. Therefore, the CW-EPR signal of the Al-E' defect cannot be observed in Figure 7c.
Figure 8a,b show the RIA and CW-EPR spectra of the GAY samples. Based on the fitting results in Figure 7, the RIA at 540 nm in Figure 8a is mainly attributed to Al-OHC, and the RIA in the ultraviolet band is mainly attributed to Ge-related defects. In Figure 8b, the CW-EPR signal at 331 mT is likewise mainly attributed to Al-OHC defects, and the CW-EPR signal in the central region (334-337 mT) is mainly attributed to Ge-related defects. Figure 8a,b show that with an increase in the GeO2 content, the RIA and CW-EPR intensities of the Al-OHC decreased significantly, while those of the Ge-related defects gradually increased. Compared with Ge-related defects, the absorption band of Al-OHC defects lies closer to the infrared; its tail can even extend to 1 µm. Therefore, Ge-related defects have a smaller impact on the spectral properties of Yb3+ ions than Al-OHC.
Previous studies have shown that in Yb/Al co-doped silica glasses, Yb3+ ions are coordinated with [AlO4/2]− tetrahedra, and the following reactions occur during the irradiation process [21,25]:

[AlO4/2]− + h° → Al-OHC

Yb3+ + e• → Yb2+

In Yb/Al/Ge co-doped silica glasses, the content of Ge-ODC increases with increasing GeO2 content. There are two types of Ge-ODC color centers (Ge-ODC(I) and Ge-ODC(II)), which can be converted into each other during the irradiation process. The presence of Ge-ODC was confirmed by the fluorescence spectra (see Figure 2b). Since Ge-ODC(I) has a stronger ability to capture holes than the [AlO4/2]− group [41], Ge co-doping enables Ge-ODC(I) to capture more holes, thereby increasing the radiation-induced Ge-E' content:

Ge-ODC(I) + h° → Ge-E'  (3)

Meanwhile, the number of holes captured by the [AlO4/2]− groups is reduced, thereby reducing the Al-OHC content. Figure 8a shows a cavity in the RIA curve at 248 nm, which indicates that the Ge-ODC content decreases after irradiation. With an increase in the GeO2 content, the intensity of the RIA curve at 540 nm weakens, which means that the Al-OHC defect content decreases. In addition, as the number of [GeO4/2]0 groups increases with increasing GeO2 content, additional electrons are captured by the [GeO4/2]0 groups to form Ge(1) and Ge(2) color centers:

[GeO4/2]0 + e• → Ge(1) (or Ge(2))  (4)

in competition with the other electron-trapping sites (M-E', M = Si, Al or Ge). Figure 8a shows that the RIA intensity at 269 nm increases significantly with increasing GeO2 content, indicating an increase in the Ge(1) defect content. Figure 8b shows that the CW-EPR intensity of the Ge(1) and Ge-E' defects increases with increasing GeO2 content. This process suppresses the formation of the other trapped-electron centers (Yb2+/Si-E'/Al-E').
Conclusions
Herein, the effects of Ge co-doping on the valence state of Yb2+/3+ ions, the spectral properties of Yb3+ ions, and the radiation resistance of Yb/Al/Ge co-doped silica glasses were studied systematically. For the pristine GAY samples, with an increase in the GeO2 content, the generation of Yb2+ ions was considerably suppressed, and the spectral properties of Yb3+ ions were improved slightly. After X-ray irradiation, the RIA and CW-EPR spectra confirmed that the Al-OHC defects were effectively inhibited by Ge co-doping. In addition, the fluorescence integral intensity and fluorescence lifetime results also confirmed that the radiation resistance of the samples improved with increasing GeO2 content.
The generation and suppression mechanisms of Yb2+ ions in the pristine Yb/Al/Ge co-doped silica glasses, and of color centers in the irradiated samples, were discussed. For the pristine samples, Al-ODC trapped holes to form Al-E' defects during high-temperature sintering. With increasing GeO2 content, Ge-ODC became the main ODC defect at the expense of Al-ODC; thus, the relatively stable Ge-ODC inhibited the process by which Yb3+ ions trap electrons to form Yb2+ ions. When the Yb/Al/Ge co-doped silica glasses were irradiated, the [AlO4/2]− groups trapped holes to form Al-OHC, and Yb3+ ions trapped electrons to form Yb2+ ions. Since Ge-ODC has a stronger ability to capture holes than the [AlO4/2]− groups, the formation of trapped-hole centers (Al-OHC) was inhibited. With an increase in the GeO2 content, the increasing number of [GeO4/2]0 groups formed Ge(1) and Ge(2) color centers by trapping electrons, which inhibited the formation of all other types of trapped-electron centers (Yb2+/Si-E'/Al-E'). This work suggests that Ge co-doping is effective for suppressing the generation of Yb2+ ions and improving the radiation resistance of Yb-doped silica glasses.
Data Availability Statement:
The data that support the findings of this study are contained within the article.
Conflicts of Interest:
The authors declare no conflict of interest.
"Materials Science",
"Physics"
] |
Efficient anti-Prelog enantioselective reduction of acetyltrimethylsilane to (R)-1-trimethylsilylethanol by immobilized Candida parapsilosis CCTCC M203011 cells in ionic liquid-based biphasic systems
Background: Biocatalytic asymmetric reductions with whole cells can offer high enantioselectivity, environmentally benign processes and energy-efficient operations, and are thus of great interest. The application of whole-cell-mediated bioreduction is often restricted when the substrate and product have low water solubility and/or high toxicity to the biocatalyst. Many studies have shown that a biphasic system is often useful in this instance. Hence, we developed efficient biphasic reaction systems with biocompatible water-immiscible ionic liquids (ILs) to improve the biocatalytic anti-Prelog enantioselective reduction of acetyltrimethylsilane (ATMS) to (R)-1-trimethylsilylethanol {(R)-1-TMSE}, a key synthon for a large number of silicon-containing drugs, using immobilized Candida parapsilosis CCTCC M203011 cells as the biocatalyst. Results: It was found that the substrate ATMS and the product 1-TMSE exert pronounced toxicity on immobilized Candida parapsilosis CCTCC M203011 cells. Biocompatible water-immiscible ILs can be applied as a substrate reservoir and in situ extractant for the product, greatly enhancing the efficiency of the biocatalytic process and the operational stability of the cells compared with the IL-free aqueous system. Various ILs exerted significant but different effects on the bioreduction, and the performance of the biocatalysts was closely related to the kinds and combinations of the cations and anions of the ILs. Among all the water-immiscible ILs investigated, the best results were observed in the 1-butyl-3-methylimidazolium hexafluorophosphate (C4mim·PF6)/buffer biphasic system. Furthermore, the optimum substrate concentration, volume ratio of buffer to IL, buffer pH, reaction temperature and shaking rate for the bioreduction were 120 mM, 8/1 (v/v), 6.0, 30°C and 180 r/min, respectively. Under these optimized conditions, the initial reaction rate, the maximum yield and the product e.e. were 8.1 μmol/min g cwm, 98.6% and >99%, respectively. The efficient whole-cell biocatalytic process was shown to be feasible on a 450-mL scale. Moreover, the immobilized cells retained around 87% of their initial activity even after being used repeatedly for 8 batches in the C4mim·PF6/buffer biphasic system, exhibiting excellent operational stability. Conclusions: For the first time, we have successfully utilized immobilized Candida parapsilosis CCTCC M203011 cells to efficiently catalyze the anti-Prelog enantioselective reduction of ATMS to enantiopure (R)-1-TMSE in the C4mim·PF6/buffer biphasic system. The substantially improved biocatalytic process appears to be effective and competitive on a preparative scale.
Background
Enantiopure chiral alcohols have been shown to be versatile chiral building blocks for the synthesis of chiral pharmaceuticals, agrochemicals, pheromones, flavors and liquid crystals [1,2]. Nowadays, enantiopure silicon-containing chiral alcohols are becoming increasingly attractive owing to the unique physical and chemical characteristics of the silicon atom, such as its larger atomic radius and smaller electronegativity compared with carbon [3]. Accordingly, these silicon-containing compounds play an important role not only in asymmetric synthesis and functional materials, but also in the preparation of silicon-containing drugs, such as Zifrosilone, Cisobitan and TAC-101 {4-[3,5-bis(trimethylsilyl)benzamido]benzoic acid}, which possess greater pharmaceutical activity, higher selectivity and lower toxicity than their carbon counterparts [4][5][6]. The economic preparation of enantiopure chiral alcohols through asymmetric reduction of prochiral ketones has been proven by a number of studies to be a reliable, scalable and straightforward route [7][8][9]. Compared with conventional chemical methods, biocatalytic asymmetric reductions using isolated enzymes or whole cells as biocatalysts can offer high enantioselectivity, environmentally benign processes and energy-efficient operations, and are thus of great interest [10]. The major advantages of using whole cells rather than isolated enzymes as biocatalysts are that cells provide a natural environment for the enzymes, preventing conformational changes in the protein structure that would lead to loss of activity in non-conventional media, and that they can efficiently regenerate the expensive cofactors [11].
In our previous study, the use of Saccharomyces cerevisiae for the asymmetric reduction of acetyltrimethylsilane (ATMS) to (S)-1-trimethylsilylethanol {(S)-1-TMSE} yielded improved results in an aqueous/organic biphasic system compared with a monophasic aqueous system [12]. However, the use of conventional organic solvents in such processes may be problematic because in many cases they are toxic to the microbial cells and lead to poor operational stability. Also, they may be explosive and are usually environmentally harmful. Ionic liquids (ILs) have recently emerged as novel green solvents for a great variety of biocatalytic transformations and are becoming more and more attractive as a promising alternative to conventional organic solvents in such applications [13][14][15][16]. Many kinds of ILs have proven to be biocompatible with diverse microbial cells and offer many advantages for biotransformations, such as high conversion rates, high enantioselectivity, excellent operational stability and recyclability [17]. Recently, we reported the successful synthesis of enantiopure (S)-1-TMSE with immobilized Saccharomyces cerevisiae cells using an IL as the reaction medium, with markedly improved results (yield: 99.2%, product e.e. > 99.9%) [18]. To the best of our knowledge, although the biocatalytic reduction of ATMS to (S)-1-TMSE following the Prelog rule has been successfully established, the biocatalytic anti-Prelog enantioselective reduction of ATMS to (R)-1-trimethylsilylethanol {(R)-1-TMSE} using microbial cells, especially in IL-containing systems, has so far remained unexplored, with the exception of only one account we reported, in which the biotransformation was carried out only in the neat aqueous monophasic system [19]. The biocatalyst used in our previous study was Candida parapsilosis CCTCC M203011 cells, which are capable of effectively catalyzing the anti-Prelog stereoselective reduction of a number of carbonyl compounds, possibly because they possess four novel anti-Prelog stereoselective carbonyl reductases [20][21][22]. However, the substrate and the product showed pronounced inhibitory and toxic effects on the microbial cells in the aqueous monophasic system, resulting in relatively low reactant concentrations and reaction efficiency [19].
In the present study, we, for the first time, report the utilization of various water-immiscible ILs (Table 1) in a two-phase system to efficiently improve the biocatalytic reduction of ATMS to (R)-1-TMSE with immobilized Candida parapsilosis CCTCC M203011 cells (Figure 1), and the examination of the effect of these ILs on the biocatalytic reaction. In this process, ATMS is reduced to enantiopure (R)-1-TMSE while NAD(P)H is converted to NAD(P)+, and glucose is simultaneously oxidized to CO2, presumably driving the reduction reaction by regenerating NAD(P)H from NAD(P)+. Moreover, the IL-based biphasic systems can efficiently overcome the limitation of substrate and/or product inhibition often observed during the bioreduction of carbonyl compounds in a monophasic system [23,24], and consequently a high product yield can be achieved without cofactor supplements. Additionally, the efficient biocatalytic process in the presence of ILs was tested on a preparative scale and shown to be effective and competitive.
Results and discussion
Effect of various water-immiscible ILs on the anti-Prelog enantioselective reduction of ATMS to (R)-1-TMSE with immobilized Candida parapsilosis CCTCC M203011
Many studies have shown that a biphasic system is often useful in whole-cell biocatalysis if the substrate and product have low water solubility or high toxicity to the biocatalyst [10,25,26]. Therefore, the viability of immobilized Candida parapsilosis CCTCC M203011 cells, with and without the addition of the substrate ATMS, was studied in the aqueous monophasic system as well as in the IL-based biphasic systems. As shown in Figure 2, the cell viability clearly decreased in the presence of substrate compared with its absence in all reaction systems, especially in the aqueous monophasic system, suggesting that ATMS manifests substantial toxicity to immobilized Candida parapsilosis CCTCC M203011 cells. It was noted that, in the presence of substrate, the cell viability was significantly higher in all the IL-based biphasic systems than in the aqueous monophasic system. Meanwhile, in the absence of substrate, the cell viability was lower in all tested IL-based biphasic systems than in the aqueous monophasic system, indicating that the ILs were toxic to the cells to some extent. Furthermore, to better understand the toxic or inhibitory effects of the product, the deactivation profiles of the cells in different reaction systems in the presence of 40 mM 1-TMSE were investigated (Figure 3). After incubation in the aqueous system with 1-TMSE for 12 h, the cells retained only 67% of their original activity, clearly showing the severe toxic or inhibitory effect of the product. However, the cells in the IL-based biphasic systems retained much higher relative activity after incubation for the same period. Based on the results depicted in Figures 2 and 3, the water-immiscible ILs can serve as an excellent second liquid phase in the bioreduction process, acting as a substrate reservoir and in situ extractant for the product.
Partition coefficients have been applied as an important criterion for the preliminary screening of the second phase. Higher partition coefficients between IL and buffer could effectively reduce the effects of the toxic substrate and product on the cells, as well as the pronounced substrate and product inhibition of the reaction observed in the aqueous monophasic system [25,27]. As demonstrated in Table 2, all the tested ILs possessed considerably high partition coefficients for ATMS and 1-TMSE between the IL phase and the buffer phase. The partition coefficients of ATMS were significantly higher than those of 1-TMSE, owing to the stronger lipophilicity of the former. Moreover, combining the results in Table 2 with those in Figures 2 and 3, a higher partition coefficient of substrate and product between the two phases correlated with a higher biocompatibility of the IL with the biocatalyst, which in turn has an outstanding effect on overall process efficiency.
It is well known that the influence of different ILs on a biocatalytic reaction mediated by different microorganisms varies widely, and that the performance of the biocatalysts is closely related to the kind and combination of the cation and anion of the ILs [25,[28][29][30]. Therefore, a comparative study of the effect of ILs with different cation-anion combinations on the bioreduction of ATMS with immobilized Candida parapsilosis CCTCC M203011 cells was carried out in various water-immiscible IL-based biphasic systems (Table 3). The immobilized Candida parapsilosis CCTCC M203011 cells were capable of efficiently synthesizing (R)-1-TMSE by anti-Prelog reduction of ATMS in all of these systems, with a high product e.e. of above 99%. In general, the specific reaction rate (0.55-1.25 μmol/min g cwm) and the maximum yield (78.5-95.8%) in the IL-based biphasic systems were much higher than those in the aqueous monophasic system (0.42 μmol/min g cwm and 70.5%, respectively) under the same reaction conditions.
In the case of the Cnmim·PF6/buffer (n = 4-7) biphasic systems, the initial reaction rate and the maximum yield clearly decreased with elongation of the alkyl chain attached to the cation (i.e., increasing n value) (Table 3).
These results were possibly caused by the increase in viscosity of the IL with increasing n value, which may reduce the substrate and product mass-transfer rates between the two phases [13,31,32]. Additionally, both the slightly reduced partition coefficients of ATMS between IL and buffer (Table 2) and the lower biocompatibility of the IL with the immobilized Candida parapsilosis CCTCC M203011 cells (Figures 2 and 3) with increasing n value could contribute to this phenomenon. When the n-butyl group attached to the imidazolium cation of C4mim·PF6 was replaced by iso-butyl (iC4mim·PF6), the initial reaction rate and the maximum yield clearly declined, indicating that even a minor change in IL structure exerts a substantial influence on catalyst performance. Coincidentally, a decrease in cell viability and partition coefficients (Figures 2 and 3 and Table 2) was observed on changing the IL from C4mim·PF6 to iC4mim·PF6. For the Cnmim·Tf2N/buffer (n = 2, 4, 6) biphasic systems, the variation of the initial reaction rate and the maximum yield with elongation of the alkyl chain agrees with that observed in the Cnmim·PF6/buffer (n = 4-7) media (Table 3). In the three remaining Tf2N−-based (C4mpyrr·Tf2N, C4mpip·Tf2N and P6,4,4,4·Tf2N) biphasic systems, the bioreduction efficiency varied with the IL cation.
On the whole, the diverse ILs showed significant but different effects on the catalytic performance of the immobilized Candida parapsilosis CCTCC M203011 cells and on the bioreduction reaction. Interestingly, the initial reaction rates, yields and cell viability were tightly associated with the partition coefficient of the substrate (Ps) in the IL/aqueous system. Indeed, an approximately linear relationship was obtained by plotting the initial reaction rates, yields and cell viability against Ps (Figure 4). As illustrated in Figure 4, the initial reaction rates, yields and cell viability all increased with rising Ps, which indicates that the ILs affect the activity of the biocatalyst largely by influencing the substrate concentration in the aqueous layer around the biocatalyst. Therefore, the superior performance of the biocatalyst in the IL-containing systems can be attributed to the markedly reduced exposure of the cells to the toxic substrate and to reduced substrate inhibition, since the ILs effectively extract the substrate from the aqueous phase. This finding is consistent with that of Yang and Robb, in which Ps showed systematic relations with the activity of mushroom tyrosinase in organic solvent/aqueous systems [33].
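As a small illustration of the linear-association analysis behind Figure 4 (the Ps and rate values below are placeholders, not the measured data):

```python
import numpy as np

# Illustrative check of the (approximately) linear rate-vs-Ps relationship.
ps = np.array([25.0, 40.0, 55.0, 70.0, 90.0])      # substrate partition coeff.
rate = np.array([0.55, 0.72, 0.88, 1.02, 1.25])    # umol/(min g cwm)

slope, intercept = np.polyfit(ps, rate, 1)          # least-squares line
r = np.corrcoef(ps, rate)[0, 1]                     # linear correlation
print(f"rate ~ {slope:.4f}*Ps + {intercept:.3f}, r = {r:.3f}")
```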
Based on the above results, among all the water-immiscible ILs investigated, C4mim·PF6 gave the fastest initial reaction rate and the highest yield. It should be noted that common ILs with PF6− as the anion raise concern over potential hazards, because their hydrolysis may release HF [34].
(Table 3 notes. Reaction conditions: 40 mM ATMS, TEA-HCl buffer (100 mM, pH 6.5)/IL volume ratio of 2/1, 20% (w/v) glucose, 0.2 g/mL cell-loaded alginate beads, 30°C, 180 r/min. (a) The initial reaction rate (V0) is defined as the initial rate of product formation in the total reaction system, expressed as the specific activity in μmol product per min per gram of cell wet mass (cwm) unless specified otherwise. (b) Maximum yield.)
In our study, the best bioreduction results and excellent cell biocompatibility were observed in the C4mim·PF6/buffer biphasic system, consistent with many other published papers [10,25,35]. Hence, C4mim·PF6 was chosen as the second phase of the IL/buffer biphasic system for the subsequent experiments; meanwhile, several new types of water-immiscible ILs, believed to be safer than the present one, are being investigated for further improvement of this bioreduction process.
Effects of key variables on the biocatalytic reduction of ATMS
For a better understanding of the biocatalytic anti-Prelog reduction of ATMS to (R)-1-TMSE with immobilized Candida parapsilosis CCTCC M203011 cells performed in the C4mim·PF6/buffer biphasic system, the effects of several crucial variables, such as substrate concentration, volume ratio of buffer to IL, buffer pH, reaction temperature and shaking rate, were studied systematically. As shown in Figure 5, the initial reaction rate increased with increasing substrate concentration up to 120 mM, while the maximum yield and product e.e. showed no clear variation. A further increase in substrate concentration, however, led to a marked drop in the initial reaction rate and the maximum yield, possibly caused by substrate inhibition at high substrate concentrations. Clearly, the optimal substrate concentration in the C4mim·PF6/buffer biphasic system was 120 mM, much higher than that in the aqueous monophasic system (20 mM) [19].
It is well known that the volume ratio of the two phases exerts a great impact on biocatalytic reactions with whole cells in biphasic systems, influencing not only the interfacial area but also the viability of the microbial cells [36,37]. As illustrated in Figure 6, the volume ratio of the aqueous buffer phase to the IL phase (Vaq/VIL, mL/mL) substantially affected the initial reaction rate and the maximum yield, but had little effect on the product e.e. An obvious enhancement in the initial reaction rate and the maximum yield was observed with increasing Vaq/VIL up to 8/1, possibly owing to less contact between the cells and the substrate molecules dissolved in the IL, thus reducing the inactivating effect of the substrate and/or IL on the biocatalyst. A further rise in the Vaq/VIL ratio resulted in a decrease in the initial reaction rate, which may be attributed to the lower substrate concentration in the aqueous phase. It is therefore clear that 8/1 is the optimum Vaq/VIL ratio for the reaction.
(Figure 6. Effect of the volume ratio of buffer to IL on the bioreduction of ATMS catalyzed by immobilized Candida parapsilosis CCTCC M203011 cells. Symbols: (Δ) initial reaction rate; (□) maximum product yield by GC analysis; (○) product e.e. All products have the (R) configuration. Reaction conditions: 120 mM ATMS, various volume ratios of TEA-HCl buffer (100 mM, pH 6.5)/IL, 20% (w/v) glucose, 0.2 g/mL cell-loaded alginate beads, 30°C, 180 r/min.)
Buffer pH is one of the most important parameters affecting enzyme-catalyzed reactions [23,38,39]. The great impact of buffer pH on the bioreduction of ATMS in the C4mim·PF6/buffer biphasic system is shown in Figure 7. Increasing the buffer pH from 5.0 to 6.0 raised the initial reaction rate from 3.5 μmol/min g cwm to 8.1 μmol/min g cwm, while the maximum yield markedly increased from 85.6% to 98.6%. However, a further increase in buffer pH led to a clear drop in the initial reaction rate and the maximum yield. Moreover, buffer pH showed little influence on the product e.e., which remained above 99.9% within the range tested, indicating that no undesired isoenzymes were active within this pH range. Based on these results, pH 6.0 is clearly the optimal value for the bioreduction.
Reaction temperature has a crucial impact on the activity, selectivity and stability of a biocatalyst, as well as on the equilibrium of a reaction [23,39]. As shown in Figure 8, the initial reaction rate increased markedly with increasing reaction temperature from 25°C to 35°C, since higher temperatures accelerate the molecular collisions between enzyme and substrate. However, the maximum yield decreased sharply when the temperature exceeded 30°C, which might be due to partial inactivation of the enzymes in the whole cells at higher temperatures. Throughout the tested temperature range, the product e.e. showed no variation and remained above 99%. Therefore, the suitable reaction temperature for the bioreduction was 30°C. Shaking rate affects the diffusion and partitioning of the substrate and product in the biphasic reaction system and thus influences the initial reaction rate, the maximum yield and the product e.e., especially in IL-containing systems, because the high viscosity of ILs limits the diffusion of substrates and products to and from the active sites of the enzymes [40,41]. As shown in Figure 9, the initial reaction rate increased rapidly with increasing shaking rate up to 180 r/min, suggesting that mass transfer was the rate-limiting step. Further increases in the shaking rate had little effect on the initial reaction rate, the maximum yield or the product e.e., indicating that 180 r/min was the optimal shaking rate for the bioreduction in the C4mim·PF6/buffer biphasic system. Under the optimum conditions described above, the efficiency of the biocatalytic anti-Prelog stereoselective reduction of ATMS to (R)-1-TMSE with immobilized Candida parapsilosis CCTCC M203011 cells was substantially enhanced in the C4mim·PF6-containing biphasic system compared with the neat aqueous buffer system [19] at the respective optimum reaction conditions, in terms of optimum substrate concentration (120 mM vs 20 mM), initial reaction rate (8.1 μmol/min g cwm vs 0.98 μmol/min g cwm) and maximum yield (98.6% vs 96.5%), while the product e.e. remained above 99%. When the substrate concentration exceeded 20 mM in the neat aqueous system, the initial reaction rate and the maximum yield decreased rapidly because of the inhibitory and toxic effects of the substrate and product [19]. Therefore, the use of the C4mim·PF6/buffer biphasic system instead of the aqueous monophasic system can markedly improve the biocatalytic reduction of ATMS.
Operational stability of immobilized Candida parapsilosis CCTCC M203011 cells
For a deeper understanding of the effect of the IL on the whole-cell bioreduction, it is essential to compare the operational stability of the immobilized Candida parapsilosis CCTCC M203011 cells in the different systems. As shown in Figure 10, the operational stability of the immobilized cells was significantly enhanced in the C4mim·PF6/buffer biphasic system compared with the aqueous monophasic system. The immobilized cells retained around 87% of their original activity even after being used repeatedly for 8 batches in the C4mim·PF6/buffer biphasic system. In contrast, the relative activity of the immobilized cells was only about 31% after being re-used for the same number of batches in the aqueous monophasic system. The good biocompatibility of the IL C4mim·PF6 and its excellent solvent properties for the toxic substrate and product could partly account for these observations. The improved interactions between the IL and the carrier (calcium alginate) [42] used for the immobilization of the Candida parapsilosis CCTCC M203011 cells may also contribute to the good operational stability of the immobilized cells in the C4mim·PF6-containing system. The cells may also become coated with the IL and thus be protected from inactivation.
Preparative-scale biotransformation in the C4mim·PF6-based biphasic system
To show the applicability of the biocatalytic anti-Prelog reduction of ATMS to (R)-1-TMSE using immobilized Candida parapsilosis CCTCC M203011 cells in the C4mim·PF6/buffer biphasic system, we also performed the bioreduction on a 450-mL preparative scale under the optimized reaction conditions detailed above. The reaction was monitored by GC analysis, and the product was extracted from the reaction mixture with n-hexane upon completion of the reaction. The bioreduction behavior was similar to that observed on the 4.5-mL experimental scale. Although marginally lower than that obtained on the 4.5-mL scale, the isolated yield of (R)-1-TMSE (97.5%) after 12 h of reaction on the 450-mL scale was much higher than that in the neat aqueous system under identical reaction conditions, and the product e.e. was more than 99%. Furthermore, no emulsification of the IL-based biphasic system was observed, so the phases could readily be separated by centrifugation. The IL could also be easily recycled, lowering the overall cost of the biocatalytic process. Hence, the biocatalytic reduction of ATMS in the presence of C4mim·PF6 shows enormous potential for industrial application.
Conclusions
The efficient synthesis of (R)-1-TMSE can be successfully conducted with high yield and excellent product e.e. via the biocatalytic anti-Prelog enantioselective reduction of ATMS using immobilized Candida parapsilosis CCTCC M203011 cells in a two-phase system containing water-immiscible ILs. The diverse ILs, with different combinations of cation and anion, different lengths of the alkyl chain attached to the cation and different partition coefficients of the substrate in the IL/aqueous system, all showed significant but different effects on the catalytic performance of the immobilized Candida parapsilosis CCTCC M203011 cells and on the bioreduction reaction. Among the examined ILs, C4mim·PF6 markedly boosted the reaction efficiency of the bioreduction and gave the best biotransformation results, which may be due to its excellent solvent properties for the substrate and its good biocompatibility with the microbial cells. The immobilized cells also manifested very good operational stability in the presence of C4mim·PF6, as supported by the observation that they maintained much higher activity in the C4mim·PF6-based system than in the neat aqueous system after successive re-use for 8 batches (87% vs 31%). Moreover, the results described here clearly show that the whole-cell biocatalytic process with C4mim·PF6 is promising for the efficient synthesis of (R)-1-TMSE and is feasible up to a 450-mL preparative scale.
Materials
The ILs listed in Table 1 were purchased from Lanzhou Institute of Chemical Physics (China) and were all of over 98% purity. All other chemicals were obtained from commercial sources and were of analytical grade.
Cultivation and immobilization of Candida parapsilosis CCTCC M203011 cells
Candida parapsilosis CCTCC M203011 cells were cultivated and then immobilized using the calcium alginate entrapment method according to our previous study [43]. In brief, a homogeneous cell/sodium alginate suspension was first prepared and then added dropwise by syringe into a gently stirred CaCl2 solution (2%, w/v), in which the calcium alginate beads precipitated.
The bioreductions were carried out in an Erlenmeyer flask capped with a septum. Alginate beads loaded with 31% (w/w) Candida parapsilosis CCTCC M203011 cells {based on cell wet mass (cwm)} were prepared, and 0.2 g of these cell-loaded alginate beads were added per mL of the aqueous phase, together with 20% (w/v) glucose. The reaction mixture was pre-incubated in a water-bath shaker at 180 r/min and various temperatures (25-45°C) for 15 min. The reactions were then initiated by adding ATMS at various concentrations (20-160 mM, based on the volume of the IL phase). Aliquots (10 μL) were withdrawn at specified time intervals from the IL phase and the aqueous phase, respectively, and the product as well as the residual substrate was extracted with n-hexane (50 μL) containing 5.6 mM n-nonane (as an internal standard) prior to GC analysis. Details of the IL used, the volume ratio of buffer to IL, the substrate concentration, the buffer pH and the reaction temperature are specified for each case.
The preparative-scale biocatalytic reduction of ATMS to (R)-1-TMSE was carried out by adding 80 g of immobilized Candida parapsilosis CCTCC M203011 cells and 6 mmol of ATMS to 450 mL of the biphasic system (volume ratio 1/8) consisting of C4mim·PF6 and TEA-HCl buffer (100 mM, pH 6.0) containing 20% (w/v) glucose, at 30°C and 180 r/min. The reaction was terminated when no substrate was detectable by GC analysis. The immobilized cells were removed by filtration, and the product was extracted from the reaction mixture with n-hexane. The product e.e. and the isolated yield were determined by GC analysis.
GC analysis
Reaction mixtures were analyzed according to the GC analysis method in our previous work [18]. The retention times for ATMS, n-nonane, (R)-1-TMSE and (S)-1-TMSE were 3.53 min, 6.09 min, 6.74 min and 7.35 min, respectively. The average error for this determination was less than 1.0%. All reported data were averages of experiments performed at least in duplicate.
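For reference, the enantiomeric excess reported throughout follows from the GC peak areas of the two enantiomers. A minimal sketch with made-up peak areas (internal-standard normalization omitted):

```python
def enantiomeric_excess(area_R, area_S):
    """e.e. (%) of the R enantiomer from GC peak areas."""
    return 100.0 * (area_R - area_S) / (area_R + area_S)

# Illustrative peak areas only
print(f"e.e. = {enantiomeric_excess(area_R=995.0, area_S=5.0):.1f}%")  # 99.0%
```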
Cell viability assay
The viability of immobilized Candida parapsilosis CCTCC M203011 cells was assayed after incubating the alginate-immobilized cells for 12 h in the various biphasic systems consisting of a water-immiscible IL and TEA-HCl buffer (100 mM, pH 6.5) (IL/buffer volume ratio 1/2), or in the TEA-HCl buffer (100 mM, pH 6.5) system, with and without the addition of substrate (40 mM ATMS, based on the volume of the IL phase). The beads of immobilized Candida parapsilosis CCTCC M203011 cells were withdrawn from the reaction systems and then added to 0.1 M trisodium citrate to dissolve the alginate. After this, the microbial cell suspension was diluted and stained with 0.1% Methylene Blue for 5 min. Micrographs were taken and analyzed for blue (dead) cells and colorless (viable) ones. The cell viability was expressed as the percentage of viable cells in the total cell count, and the values are given as mean value ± standard deviation (n = 3).
Determination of partition coefficients
Partition coefficients (K IL/aq ) were determined by dissolving 12, 24 or 36 mM ATMS or 1-TMSE, as appropriate, in each IL/buffer biphasic system (IL/buffer volume ratio: 1/2) and shaking (180 r/min) for 36 h at 30°C. The concentrations of ATMS or 1-TMSE in the IL phase and the aqueous phase were then analyzed by GC. The concentration of ATMS or 1-TMSE in each phase varied linearly with the total amount of each chemical added to the two-phase system. Then the slopes were calculated and used for the quantification of the partition coefficients of ATMS and 1-TMSE between the IL phase and the aqueous phase.
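A minimal sketch of this slope-based estimate (the concentrations below are synthetic, chosen only to be mass-balance consistent; variable names are assumptions):

```python
import numpy as np

# Total ATMS added to the two-phase system [mM] and the resulting
# equilibrium concentrations measured by GC in each phase [mM]
total = np.array([12.0, 24.0, 36.0])
c_il = np.array([30.9, 61.5, 92.8])     # IL phase (1 volume)
c_aq = np.array([2.5, 5.1, 7.4])        # aqueous phase (2 volumes)

# Each phase concentration varies linearly with the total amount added,
# so K_IL/aq is the ratio of the two fitted slopes.
slope_il = np.polyfit(total, c_il, 1)[0]
slope_aq = np.polyfit(total, c_aq, 1)[0]
print(f"K_IL/aq = {slope_il / slope_aq:.1f}")
```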
Operational stability of immobilized Candida parapsilosis CCTCC M203011 cells
In order to assess the operational stability of the cells, the re-use of the immobilized Candida parapsilosis CCTCC M203011 cells was investigated in the C4mim·PF6/buffer biphasic system and in the aqueous monophasic system. Initially, aliquots of the cells were added into separate screw-capped vials, each containing 4.5 mL of the appropriate medium {C4mim·PF6/TEA-HCl buffer (100 mM, pH 6.0) biphasic system (volume ratio 1/8), or aqueous TEA-HCl buffer system (100 mM, pH 6.0)}, together with the optimal amounts of ATMS and glucose for the reduction in each medium. The bioreductions were then carried out at 30°C and 180 r/min and were repeated over 8 batches without changing the immobilized cells. Between batches, the immobilized cells were filtered off from the reaction mixture, washed twice with fresh water, and added to a fresh batch of reaction medium. The reduction activity of the cells was assayed in each batch, and the relative activity of the cells in the first batch was defined as 100%.
"Chemistry",
"Environmental Science"
] |
Three-dimensional numerical analysis of a dam-break using OpenFOAM
Abstract. This paper presents a 3D numerical analysis of the flow field patterns in a laboratory-scale dam break, using OpenFOAM, a numerical code based on the Finite Volume Method (FVM). In the numerical model the turbulence is treated with the RANS methodology, and the Volume of Fluid (VOF) method is used to capture the free surface of the water. The numerical results of the code are assessed against experimental data; water depth and pressure measurements are used to validate the numerical model. The results demonstrate that the 3D numerical code satisfactorily reproduces the temporal variation of these variables.
Introduction
A dam is an engineering structure constructed across a valley or natural depression to create a water storage reservoir. The fast-moving flood wave caused by a dam failure can result in the loss of human lives, a great amount of property damage, and a severe environmental impact. Therefore, significant efforts have been made over the last years to obtain satisfactory mathematical and numerical solutions for this problem. Due to advances in computational power and the associated reduction in computational time, three-dimensional (3D) numerical models based on the Navier-Stokes equations have nowadays become a feasible tool for analyzing the flow pattern.
Analytical studies of the dam break for a horizontal channel were performed by Dressler [1]. Several numerical studies based on 2D approaches have been validated against experimental data sets, as demonstrated in [2] and [3]. Two-dimensional numerical models assume negligible vertical velocities and accelerations, which results in a hydrostatic pressure distribution. However, when an abrupt failure of a dam happens, in which initially high free-surface gradients occur, the hydrostatic pressure assumption is no longer valid. Three-dimensional numerical models have been used to resolve the structure of the flow in these areas. This document presents a 3D numerical analysis of a dam break (laboratory scale) using OpenFOAM, a numerical code based on the finite volume method (FVM). Turbulence is treated using the Reynolds-averaged Navier-Stokes (RANS) k-ε (RNG) approach, and the volume of fluid (VOF) method is used to simulate the air-water interface. The numerical results of the code are assessed against experimental data obtained by Kleefsman et al [4]. Water depth and pressure measurements are used to validate the model. The results demonstrate that the 3D numerical code satisfactorily reproduces the temporal variation of these variables.
Fluid Flow model
The governing equations for mass and momentum for the fluid flow can be expressed as [5]:

$$\nabla \cdot \mathbf{u} = 0 \quad (1)$$

$$\frac{\partial (\rho \mathbf{u})}{\partial t} + \nabla \cdot (\rho \mathbf{u} \mathbf{u}) = -\nabla p + \nabla \cdot \left( 2 \mu_{eff} \mathbf{S} \right) + \rho \mathbf{g} + \sigma \kappa \nabla \alpha \quad (2)$$

where $\mathbf{u}$ is the velocity vector field, $p$ is the pressure field, $\mu_{eff}$ is the effective viscosity including the turbulent eddy viscosity, $\mathbf{S} = \frac{1}{2}\left(\nabla \mathbf{u} + \nabla \mathbf{u}^{T}\right)$ is the strain-rate tensor, $\sigma$ is the surface tension, $\kappa$ is the surface curvature, and $\alpha$ is the volume fraction function (between 0 and 1).
Free surface model
The Volume of Fluid (VOF) method is used for the analysis of the free-surface flow. A volume fraction indicator $\alpha$ is used to determine the fluid contained in each mesh element. To calculate $\alpha$, a new transport equation is introduced:

$$\frac{\partial \alpha}{\partial t} + \nabla \cdot (\alpha \mathbf{u}) + \nabla \cdot \left[ \alpha (1 - \alpha) \mathbf{u}_{r} \right] = 0 \quad (3)$$

OpenFOAM imposes the third term of equation (3), called phase compression, where $\mathbf{u}_{r} = \mathbf{u}_{l} - \mathbf{u}_{g}$. The density and viscosity in the domain are given by:

$$\rho = \alpha \rho_{l} + (1 - \alpha) \rho_{g}, \qquad \mu = \alpha \mu_{l} + (1 - \alpha) \mu_{g}$$

where the subscripts $l$ and $g$ denote the different fluids (water and air).
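As a quick check of the mixture relations above, the following sketch evaluates the volume-fraction-weighted density and viscosity; the water/air property values are typical textbook numbers, not taken from the paper.

```python
def mixture_properties(alpha, rho_l=1000.0, rho_g=1.2, mu_l=1.0e-3, mu_g=1.8e-5):
    """Volume-fraction-weighted density and dynamic viscosity of the water/air mixture."""
    rho = alpha * rho_l + (1.0 - alpha) * rho_g
    mu = alpha * mu_l + (1.0 - alpha) * mu_g
    return rho, mu

# A cell half-filled with water (alpha = 0.5)
print(mixture_properties(0.5))  # (500.6, 0.000509)
```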
Turbulence model
In the RANS equations the instantaneous flow variables are decomposed into their time-averaged and fluctuating quantities. In this analysis the k-ε (RNG) turbulence model is used because it provides improved performance for types of flows that include separation zones [6]. It is a two-equation model which provides independent transport equations for both the turbulence length scale and the turbulent kinetic energy.
Initial and Boundary Conditions
In the numerical configuration of the model, the sides surrounding the experiment and the bottom are defined as walls. At the top of the experimental box, atmospheric pressure prevails. At the beginning of the simulation an initial water height is established, which corresponds to the initial water volume of the experiment.
Model validation - Grid convergence
In this subsection, three grid resolutions are evaluated for grid convergence. The domain is discretized using a structured mesh made up of hexahedral elements.
The mesh sizes analyzed are 2, 1.5, and 1 cm. The water depth variable is chosen for the analysis due to the reliability of the measurement of its values. To quantify the numerical assessment, the quadratic mean value R² is used. Fig. 2 shows how the statistical value R² increases as the mesh size decreases at the four measurement points.
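Assuming the R² statistic here is the usual coefficient of determination between the measured and simulated series, a minimal implementation looks like this (the series values are placeholders):

```python
import numpy as np

def r_squared(measured, simulated):
    """Coefficient of determination between experimental and numerical time series."""
    measured = np.asarray(measured, dtype=float)
    simulated = np.asarray(simulated, dtype=float)
    ss_res = np.sum((measured - simulated) ** 2)
    ss_tot = np.sum((measured - measured.mean()) ** 2)
    return 1.0 - ss_res / ss_tot

# Placeholder water-depth series at one probe
print(r_squared([0.10, 0.25, 0.32, 0.28], [0.11, 0.23, 0.33, 0.27]))
```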
Numerical Simulation
In this study, after the mesh analysis, a grid of 1 cm is used. The grid cells are refined towards the bottom and the walls of the tank. An explicit second-order limited scheme is used for the convection term, an explicit second-order scheme for the diffusion term, and a first-order Euler scheme for the transient term. The simulation is run for 7 s with an automatically adapted time step using maximum CFL numbers around 0.50.
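The automatically adapted time step can be thought of as the largest dt that keeps the Courant number below the chosen bound; a simplified one-dimensional sketch (not OpenFOAM's exact algorithm) is:

```python
def adapted_time_step(max_co, dx_min, u_max, dt_max=0.05):
    """Largest dt with Co = u * dt / dx <= max_co, capped at dt_max."""
    if u_max <= 0.0:
        return dt_max  # no flow yet: fall back to the maximum allowed step
    return min(max_co * dx_min / u_max, dt_max)

# 1 cm cells, 2 m/s peak velocity, Co_max = 0.5  ->  dt = 0.0025 s
print(adapted_time_step(0.5, 0.01, 2.0))
```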
Results and Discussion
In order to analyze the capabilities of the numerical model in reproducing the flow variables in a dam break, the comparison between the RANS numerical simulation and the experimental data was quantified by the quadratic mean value R².
Water Depth
The evolution in time of the water depth (H1, H2, H3, and H4) is shown in Fig. 3. A qualitative evaluation of the results shows that the 3D model configuration is able to satisfactorily reproduce the variability in time of the water depth at the four study points (Fig. 4). The best numerical agreement occurs at the point denominated H4 (Fig. 4-d), which is located inside the water tank formed at the beginning of the experiment. On the other hand, the worst numerical agreement occurs at the point denominated H3 (Fig. 4-c).
Conclusions
This study investigates the applicability of the OpenFOAM code for the computation of flow field variables (water depth and pressure) in a laboratory-scale dam break using the RANS approach. The results demonstrate that the 3D numerical model configuration with the RANS k-ε (RNG) approach can provide reliable results for the flow field in a dam-break case. Water depth values are reproduced better than pressure values by the 3D numerical model. Although the match between the numerical solution and the physical experiment is quite promising, the application of the 3D numerical model to field-scale simulation would be computationally expensive.
"Engineering"
] |
Numerical Investigation of the Formation of a Failure Cone during the Pullout of an Undercutting Anchor
Previously published articles on anchors have mainly focused on determining the pullout force of the anchor (depending on the strength parameters of the concrete), the geometric parameters of the anchor head, and the effective anchor depth. The extent (volume) of the so-called failure cone has often been addressed as a secondary matter, serving only to approximate the size of the zone of potential failure of the medium in which the anchor is installed. For the authors of the presented research results, from the perspective of evaluating the proposed stripping technology, an important aspect was the determination of the extent and volume of the stripping, as well as of why the defragmentation of the failure cone favors the removal of the stripping products. Therefore, it is reasonable to conduct research on the proposed topic. Thus far, the authors have shown that the ratio of the radius of the base of the destruction cone to the anchorage depth is significantly larger than in concrete (~1.5), ranging from 3.9 to 4.2. The purpose of the presented research was to determine the influence of rock strength parameters on the mechanism of failure cone formation, including, in particular, the potential for defragmentation. The analysis was conducted with the finite element method (FEM) using the ABAQUS program. The scope of the analysis included two categories of rocks, i.e., those with low compressive strength (<100 MPa) and strong rocks (>100 MPa). Due to the limitations of the proposed stripping method, the analysis was conducted for an effective anchoring depth limited to 100 mm. It was shown that for anchorage depths <100 mm, for rocks with high compressive strength (above 100 MPa), there is a tendency to spontaneously generate radial cracks, leading to the fragmentation of the failure zone. The results of the numerical analysis were verified by field tests, yielding convergent results regarding the course of the defragmentation mechanism. In conclusion, it was found that in the case of gray sandstones, with strengths of 50-100 MPa, the uniform type of detachment (a compact cone of detachment) dominates, but with a much larger radius of the base (a greater extent of detachment on the free surface).
Introduction
Undercutting anchors are mainly used in equipment for embedding steel structures in engineered concrete constructions [1][2][3][4]. The authors of this study, on the other hand, have used undercutting anchors for rock mass detachment in their previous studies [5,6]. This is an unconventional method of detachment, which may have applications in special situations occurring, for example, in mining operations (excavation of rescue pits, etc.) [7,8]. Previous studies on the subject have focused mainly on the development of theoretical models of the impact of anchors on the concrete medium, as well as the practice of the installation of the anchors under consideration [9,10]. The two most notable models are the traditional 45° cone approach (ACI Method 349-85) and the concrete capacity design (CCD) method [11,12], where, in the latter, the angle of the failure surface (cone or pyramid) is 35° [13]. In existing design guidelines, the potential failure of the medium in which the anchor is fixed is approximated by a cone with a cone angle equal to α = 35° [14]. This is a convenient approximation when one is mainly interested in the load capacity of the anchor (the force that causes it to pull out). The maximum value of the anchor pullout force occurs for a length of the gap opening (measured along the cone's formation) equal to about 0.45 of the destruction cone's formation length.
The further development of the fracture takes place alongside a decrease in the pulling force and is not usually subjected to closer scrutiny. The destruction model adopted in this way significantly limits the ability to estimate the potential volume of the stripped blocks, which is of interest to the authors because of the performance estimates of the proposed detachment technology. Detailed studies on the formation of the failure zone during anchor pullout carried out by, for example, [18,19], as well as the authors of [20][21][22][23], show that the failure zone significantly differs in its shape from the failure cone adopted in standard studies. Depending on the strength of the medium, the trajectories of the fractures leading to the detachment of the solid (cone-like) depend significantly on the solid's tensile strength and the proportion of fracture energy in so-called Mode I and Mode II. To simplify the issue, for high-strength materials, the detachment surface assumes a shape similar to a paraboloid (curve 2, Figure 1), while for low-strength media, the course is more complex [5]. In the latter case, in the first phase of progression, the fracture penetrates at a small angle, along a parabola, and, at some point, transitions into a trajectory along a hyperbola. This results in a significant increase in the extent of detachment measured from the anchor axis along the free surface of the medium [21]. In both cases (curves 1 and 2, Figure 1), the extent of detachment (failure zone) is significantly greater (two to three times, e.g., [20]) than what is implied by the failure zone models recommended by the standard [14]. In contrast, [24] analyzed the effect of the diameter of the undercutting anchor head on the potential extent of the failure zone of the rock medium. For a fixed effective anchor depth and an equal angle of the conical undercutting anchor head, no significant effect of this parameter on the formation of the extent of the failure zone of the rock medium was found.
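To make the difference in extent concrete, the sketch below compares the idealized conical break-out for the CCD ratio (~1.5 h_ef, concrete) with the 3.9-4.2 ratio reported by the authors for rock. Treating the detached solid as a true cone is only a first-order approximation, since the actual surface is closer to a paraboloid or a parabola-hyperbola combination.

```python
import math

def cone_extent_and_volume(h_ef_mm, radius_ratio):
    """Base radius and volume of an idealized break-out cone with r = ratio * h_ef."""
    r = radius_ratio * h_ef_mm
    volume_mm3 = math.pi * r ** 2 * h_ef_mm / 3.0
    return r, volume_mm3

h_ef = 95.0  # effective anchorage depth [mm]
for ratio in (1.5, 4.0):  # CCD-like (concrete) vs. the authors' observed rock ratio
    r, v = cone_extent_and_volume(h_ef, ratio)
    print(f"ratio {ratio}: base radius {r:.0f} mm, volume {v / 1e6:.1f} litres")
```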
Another aspect that has been reported in previous studies [25,26] is the possibility of so-called radial gaps, which promote the separation of the destruction zone (destruction cone) into smaller fractions. However, the mechanism of their formation is not clear, and considering the research conducted so far, it is assumed that the microcracks existing in a given rock medium or potential layering or faulting are mainly responsible for their formation.
From the perspective of researching the proposed detachment technology, the potential presence of radial fractures is an important element, as it can facilitate the removal of detached blocks from the work zone of rescue crews, without the need for additional grinding.
On the other hand, in the numerical analyses of the issue conducted so far, which have been carried out using the FEM program ABAQUS with plane and, especially, axisymmetric models, it was not possible to clarify this aspect further, so an attempt was made to explore the issue of the potential generation of radial fractures (based on 3D FEM models) caused by the undercutting anchor's interaction with the rock medium.
The previously published articles on the subject were mainly limited to determining the maximum force that can be applied to an anchor so that it does not pull out, and determining the point at which such force occurs. The extent of the break-out (failure cone) was treated as a secondary matter, serving only to approximate the size of the zone of potential material failure. For the authors of the presented research results, this is a very important aspect from the perspective of predicting the volume of potentially detached rock and the actual form of detachment. This is due to the limitations of the space available for the removal of breakout products, as well as the load capacity of the available transport equipment, in the specific conditions of the application of the proposed breakout technology (such as mining or construction disasters and rescue operations, e.g., collapsed buildings after an earthquake) [27,28]. This justifies the undertaking of research on the reported subject.
In conclusion, it should be said that, given the current state of knowledge, the problem of the formation and development of radial fractures in rocks in the zone of the cone of destruction is an issue that has not yet been adequately clarified. In view of the importance of this aspect to the proposed detachment technology, as well as to the use of undercutting anchors in fastening technology, a broader analysis of this topic is justified.
The purpose of the proposed study was to determine the mechanism of the propagation of the failure zone under the action of the undercutting anchor. The study was carried out in the context of the potential formation and development of radial fractures leading to the separation of the failure cone into smaller fractions. Analyses were performed in continuous media, undisturbed by cracks or faulting, with different combinations of mechanical parameters of the medium (tensile strength, fracture energy, and the Coulomb friction coefficient between the surface of the anchor head and rock).
The results of these analyses are presented in the following section.
Assumptions for Simulation
The strength parameters of the rock medium used in the model are adequately similar to those determined for rocks subjected to anchor pull-out tests under field conditions, which was the subject of discussion in the authors' previous studies [29,30].
The mechanical parameters of the materials used in the simulation of the rock model and the anchor are shown in Tables 1 and 2. In the presented experiment, the modeled base material was sandstone. The criteria specified in the FEM models were the damage criterion ("max. principal stress") and the damage evolution ("softening linear"); the medium was continuous and homogeneous.
Geometry of the Model
The 3D geometric model of the rock medium was designed in the shape of a cylinder. Due to the symmetry of the model and the way in which cross-sections are represented in ABAQUS, half of a cylinder (the plane of symmetry includes the axis of the cylinder) with dimensions corresponding to cylinder diameter D = 1400 mm and height H = 300 mm was used for the calculations. Thus, a 3D geometric model was used in the analysis, as in Figure 2a.
Boundary Conditions
Restraint was used in the base of the model (cylinder) and on the side walls up to the height of the hole: the nodes of the model were deprived of three translational degrees of freedom, that is, U1 = U2 = U3 = 0 (as in Figure 3). The proposed type of restraint results from the fact that the adopted model of the medium is a section of the half-space of the rock medium. The dimensions of this model were assumed to be so large that the restrained nodes of the model lie outside the influence of the stress field generated by the anchor and the support of the anchor puller. U1, U2, and U3 correspond to translations along the axes of the adopted coordinate system.
The analysis was conducted for the assumed forcing of the anchor movemen the Y axis of the adopted coordinate system (with the load applied to the upper sur the anchor, along the Y axis, as in Figure 4b). Using ABAQUS routines, a mesh of onal elements with a "sweep" arrangement was used in the model. C3D8R elem eight-node linear elements with reduced integration-were used to build the mes basic example, elements with global linear dimensions of 14 mm were used. To red complexity of the computational tasks, radially, the size of the elements increases ou from 5 mm to 30 mm. The model of the rock medium and anchor, discretized wit elements, is illustrated in Figure 4. According to the CCD procedure, for h ef = 95 mm, the potential extent of detachment measured on the free rock surface was~1.5h ef = 1.5 × 95 mm = 142.5 mm. Therefore, it was much smaller than the assumed radius of the model, which is equal to 700 mm (potential lack of influence of restraints on the propagation of the destruction zone).
The analysis was conducted for the assumed forcing of the anchor movement along the Y axis of the adopted coordinate system (with the load applied to the upper surface of the anchor, along the Y axis, as in Figure 4b). Using ABAQUS routines, a mesh of hexahedral elements with a "sweep" arrangement was used in the model. C3D8R elements (eight-node linear elements with reduced integration) were used to build the mesh. As a basic example, elements with global linear dimensions of 14 mm were used. To reduce the complexity of the computational tasks, the size of the elements increases radially outward from 5 mm to 30 mm. The model of the rock medium and anchor, discretized with finite elements, is illustrated in Figure 4.
The problem was implemented as a contact one; a surface-to-surface contact was assumed, with properties of the hard contact type in the normal direction and of the penalty type in the tangential direction, with a Coulomb friction coefficient of µ = 0.6.
Results of Sensitivity Analysis of the Model to the Size of the Finite Elements
The fracture analysis was carried out using the XFEM algorithm, which, according to many sources (e.g., [31]), is not very sensitive to the size of the finite elements of the mesh of the model under consideration; however, in some fracture mechanics problems, it is worth carrying out a so-called sensitivity analysis of the FEM model to determine its sensitivity to the size of the finite element mesh, in order to select its optimal structure.
In order to determine the sensitivity of the model to the size of the finite elements of the FEM mesh, a broader analysis of this aspect was carried out, using relevant models.
Model "A"
Elements with a global linear dimension of 14 mm were used; radially, the size of the elements increases outward from 5 mm to 30 mm. The number of elements on the half-diameter of the model was 22. The characteristics of the finite element mesh of the model were as follows: number of nodes: 28,877; number of elements: 26,086. The result of the simulation (the distribution of maximum stress σ_max and the trajectory of the crack) is illustrated in Figure 5.
Model "B"
Elements with a global linear dimension equal to 28 mm were used; radially, the size of the elements increases outward from 5.6 mm to 28.2 mm. The number of elements on the half-diameter of the model totaled 22. The characteristics of the finite element mesh of the model are as follows: number of nodes: 23,856; number of elements: 21,280 (as illustrated in Figure 6).
Model "C"
Elements with a global linear dimension of 28 mm were used; radially, the size of the elements increases outward from 11.5 mm to 57 mm. The number of elements on the half-diameter of the model corresponded to 12.
As a result of the analysis, it was found that the density of the finite element mesh has only a negligible effect on the formation of the trajectory of the resulting crack. However, it significantly affects the calculation time and the smoothness of the surface of the failure cone. As a consequence of the analysis, a research finite element mesh model of type "A" was selected for further study, where elements with a global linear dimension of 14 mm were used to build the model; to reduce the computational task, the size of the elements increases radially outward from 5 mm to 30 mm.

The results obtained from the detailed analysis of the issue are illustrated in Figures 11 and 12. Figure 11a,b show the distribution of resultant displacements in the failure zone and the resulting outline of the failure zone (the so-called failure cone). It can be seen from these figures that the largest displacements of the rock medium occur at the periphery of the anchor hole and decrease along the radius of the model. In addition, the appearance of a radial crack was noted in a plane almost perpendicular to the plane of symmetry of the model. Figure 11c, in turn, shows the distribution of max normal stresses σ_max. A particular concentration of stresses can be seen near the tips of the cracks; the radial one is particularly notable. Figure 11d illustrates the trajectories of the resulting cracks against the background of the model mesh (at an enlarged deformation scale).

To better illustrate the obtained relationships, Figure 12 shows (without the FEM mesh) the obtained outline of the failure surface (Figure 12a,b), the propagation of the radial crack (Figure 12a,c), and the image of the deformation of the rock medium under the action of the anchor, which is observed in a cross-section through the axis of the anchor hole (Figure 12b).
Effect of Changing the Mechanical Parameters of the Rock Medium
In order to extend the scope of the conducted analysis, the influence of selected mechanical parameters of the rock medium on the potential occurrence of radial fractures (cracks) in the zone of formation of the so-called failure cone was analyzed. The influences of the Coulomb friction coefficient (µ, in the contact zone of the anchor surface with the rock), the tensile strength (f_t), and the critical fracture energy rate (G_fc) were taken into account. The results obtained are illustrated in Figures 13-16. Figure 13 shows the results of a simulation carried out with a significantly reduced value of the Coulomb friction coefficient (µ = 0.2). This smaller value of the coefficient, compared with the simulations shown in Figures 11 and 12 (µ = 0.6), did not yield a significant change in the behavior of the failure zone. Figure 13 (as well as Figures 15 and 16) illustrates the deformation of the model obtained in the ABAQUS program based on normalized measures of the eigenvectors of the model nodes. For the standard deformation scale of the model used (zoom × 1), the model shows trends in the deformation of the model areas and maps the cracking trajectory of the medium. Contrary to the other images (where zoom >> 1), however, the model does not allow for the depiction of the opening of the gap. In Figure 14, contrary to the other images, the scale of displacement is 1:1 (zoom = 1, which is the standard in the program). As a result, only the trajectory of the penetrating crack is visible, and the opening of the crack is not visible. To assure identical simulation conditions, the use of a reduced value of the critical fracture energy rate G_fc (G_fc = 0.17 N/mm) in place of the one previously used in the simulation (G_fc = 0.355 N/mm) resulted in conditions that were conducive to the formation of an additional fracture in the anchor head area (Figure 14, detail "a"). The most remarkable changes in the fracture pattern of the rock medium were found in the case of its significantly reduced tensile strength, as shown in Figure 15. f_t = 3.87 MPa was used in place of the previously used tensile strength of the rock (f_t = 7.74 MPa). As a result, areas appeared (details "a", "b" and "c" in Figure 15) with a concentration of micro-cracks developing in the anchor head impact zone. In contrast, despite the significant increase in the simulation time, the increased tensile strength of the rock (f_t = 15.48 MPa) used in place of the previous value (f_t = 7.74 MPa, Figure 11) did not result in a significant change in the development of the failure zone (Figure 16). The analyses carried out by both the authors of the study and other researchers showed that for plane and axially symmetrical models, it is not possible to rationally explain the mechanism of the formation and propagation of radial cracks, which are of interest to the authors of the study. There were indications as to their potential existence, but the mechanism of propagation was not thoroughly understood.
Figure 15 confirms these suggestions and illustrates crack nucleation and crack development, including radial cracks, showing the mechanism of destruction of the medium's structure. So far, this process has not been demonstrated in such a way.
Experimental Verification
Field tests were carried out at the "Braciszów" and "Zalas" mines [21], among others, using a dedicated mobile measuring station to pull out the anchors. The stand consisted of a frame with a socket for attaching a hydraulic cylinder (Figure 17), which was supported at three points; a hydraulic cylinder; a hydraulic pump with a pressure sensor; and a measuring computer. The stand allows for the measurement of the pull-out force of the anchor. Unlike the instruments dedicated to the implementation of the "Pull-out" test, the radius of the distribution of supports was selected to eliminate the influence of the supports on the distribution of stresses in the anchor's zone of influence, and to allow for the undisturbed development of the failure zone, with the full detachment of the so-called failure cone, on the free surface of the rock.
In the "Braciszów" mine, the anchor was embedded (and pulled out) in sandstone with a uniaxial compressive strength of f_c = 97.4 MPa and a tensile strength of f_t = 6.2 MPa. At the "Zalas" mine, the anchor was embedded (and pulled out) in porphyry with a uniaxial compressive strength of f_c = 106.5 MPa and a tensile strength of f_t = 5.9 MPa. Examples of the results of the stripping of the rock solids are illustrated in Figure 18.
Closer analysis of the field results showed that in addition to the predominant detachment forms of the rock masses in a uniform cone-like form [5][6][7][8]29], there are also disjointed forms, as in Figure 18. The above forms of failure zone reflect very well the failure mechanism observed in the numerical analysis (e.g., Figure 12). Figure 18a clearly illustrates the existence of radial cracks with a course similar to that observed in the numerical analysis, positively verifying the results of this analysis.
Field tests (Figure 18) confirmed that the loosening of hard rocks with high tensile strength and high G_fc fracture energy with the undercut anchor proceeds rapidly, with radial fractures appearing each time. Ultimately, this leads to the obligatory fragmentation of the "cone of destruction". This is a clear confirmation of the results of the FEM numerical analysis (Figure 14), where a similar failure mechanism was found. In such a case, the impact of the anchor causes a slight growth of micro-cracks within its head and a very rapid development of the surface of the "cone" of damage, as well as a rapid expansion of the radial crack, leading to detachment according to curve "1" (Figure 1) alongside the rapid (almost explosive) disintegration of the cone of damage into smaller fractions. In the case of rocks of low and medium strength and relatively low fracture energy (e.g., the "Brenna" mine), field research [21] showed that the process of generating the failure cone is passive, with the intensive development of micro-cracks in the area of action of the anchor head and the slow development of the surface of the "cone" of destruction (the course according to curve "2", Figure 1). The mechanism of weak rock destruction observed in field conditions positively verifies the results of the numerical analysis presented in Figure 15, wherein the rapid generation of micro-cracks in the area of the action of the anchor head in the initial stage of the head load was clearly observed. Unlike in hard rocks, the process of the formation of the "cone" of failure is slow, leading to a large degree of loosening on the free surface of the rock medium. At the same time, the deformation of the rock (δ, Figure 12) in the case of compact rocks, as observed in field tests, was significantly smaller than in rocks of low strength (of the gray sandstone type, as in the "Brenna" mine).
Discussion
Numerical modeling using methods such as the finite element method (FEM) [32][33][34][35][36] and the boundary element method (BEM) [37,38], and the application of artificial intelligence (AI) methods [39][40][41][42][43][44], is very widely used in the engineering sciences, especially for analyzing the behavior of materials and structures [45][46][47][48]. These methods, combined with experimental studies, enable research into understanding the actual behavior of engineering structures in an effort to achieve their further optimization [49][50][51]. The research conducted to date has mainly focused on analyses leading to an understanding of concrete failure mechanisms [52][53][54][55], the development of methods for estimating anchor load capacity [56][57][58], the effect of anchor design on the pullout force [18,59], and the influence of the technological parameters of anchorage systems [60,61] on the ability of anchors to carry specific loads, including the extent of the breakout area on the free surface of concrete. The results of the analysis showed that in homogeneous materials, it is possible to generate a radial fissure that divides the zone of destruction (cone of detachment) into smaller fractions. The analysis shows that such a crack is oriented perpendicularly to the axis of the considered rock model. Compared to previous results obtained based on partial FEM models [14,18,19], the currently presented results provide new information as to the potential development of the failure zone under the action of the undercutting anchor. So far, it has been suggested that the detachments have a predominantly mono-particle form similar to the cone of destruction [5,8,62], as illustrated in Figure 1, and that the possible separation of the zone of destruction into finer fractions (observed in field studies [21]) is the result of potential disturbances in the homogeneity or continuity of the structure, which occur naturally in rock media. The results of the analysis presented herein are consistent with those presented in the literature. According to a number of sources (e.g., [19,63]), there is a potential for the occurrence of radial gaps due to the stress distribution (tension) in the upper part of the model in the area of the anchor hole (in the free surface zone), but this aspect was not analyzed further. The present analysis further showed that a lower tensile strength of the rock promotes the generation of a number of micro-cracks in the area of the undercutting head. The results of the analysis in this area are consistent with a number of studies [64][65][66][67][68], which, for example, noted that pulling out anchors with small head diameters leads first to the local crushing of the concrete (micro-cracking) under the head and, only at a later stage, to the detachment of a larger element (the so-called cone of failure). It was also found [66] that in the case of the interaction of anchors with smaller heads, the detachment of the failure cone is mainly caused by concrete failure under tension (circumferential cracking) rather than under compression (the undercutting anchors modeled in this article are more likely to be included in this group of anchors). Numerical studies show that this phenomenon intensifies for rocks of lower strength. It was also observed that there are greater displacements/deformations in the contact zone with the anchor head.
Gontarz et al. [30] and the authors of the present research have shown in previous publications [15,20] the limitations of the ABAQUS algorithm in generating the full fracture trajectory (until it reaches the free surface of the model). It is only reliable to analyze the parameters of the destruction zone in the initial range of crack development because, after this point, the algorithm cannot correctly determine the direction of crack propagation at its tip due to the appearance of the second cracking mode. As a result, the current calibration of the numerical model, set in previous studies based on the results of field research, is not very precise. Regarding the results of the field tests, detailed estimations were made of the relationships between, for example, the effective embedment depth and the range of detachments (the radius of the base of the failure cone), which were described, for example, in [5,11,21,29]. A potential correction of the calculation algorithm by the manufacturer in subsequent versions of the program will enable a full analysis of the issue.
Conclusions
The numerical analysis showed that the formation of radial fractures should be regarded as a natural process accompanying the action of the undercutting anchor on the rock medium (including homogeneous rock), which can lead to the fragmentation of the detached solids (in a cone-like form). The field tests confirmed the existence of the failure mechanism derived from the FEM analysis, including, in particular, the possibility of radial fractures leading to the dismemberment of the so-called cone of destruction into finer fractions.
The general premises related to the implementation of the breakout process with the use of undercut anchors, which result from the field tests and numerical simulations obtained so far, are as follows:
• The range of detachments, and thus the volume of the detached rocks, depends to a large extent on the strength parameters of the rock: the greater the compressive and tensile strength, the smaller the range, and vice versa.
• Because the range of breakouts (resulting from the angle of the so-called failure cone α) strongly depends on the effective anchorage depth (decreasing rapidly as this depth increases), it is rational to carry out breakouts at depths of less than 100 mm.
The experimental studies and numerical simulations presented herein showed a greater susceptibility of the destruction zone to fragmentation (through the development of radial fractures) in the case of weaker rocks and smaller anchoring depths. The determination of the optimal sets of technological parameters for anchoring and the geometrical parameters of the anchor head in the context of a given rock mass requires further detailed research.

Data Availability Statement: Data presented in this study are available from the corresponding authors upon request.
"Engineering",
"Materials Science"
] |
CSV-Filter: a deep learning-based comprehensive structural variant filtering method for both short and long reads
Abstract Motivation Structural variants (SVs) play an important role in genetic research and precision medicine. As existing SV detection methods usually contain a substantial number of false positive calls, approaches to filter the detection results are needed. Results We developed a novel deep learning-based SV filtering tool, CSV-Filter, for both short and long reads. CSV-Filter uses a novel multi-level grayscale image encoding method based on CIGAR strings of the alignment results and employs image augmentation techniques to improve SV feature extraction. CSV-Filter also utilizes self-supervised learning networks, transferred as classification models, and employs mixed-precision operations to accelerate training. The experiments showed that the integration of CSV-Filter with popular SV detection tools could considerably reduce false positive SVs for short and long reads, while keeping true positive SVs almost unchanged. Compared with DeepSVFilter, an SV filtering tool for short reads, CSV-Filter could recognize more false positive calls and supports long reads as an additional feature. Availability and implementation https://github.com/xzyschumacher/CSV-Filter
Introduction
Structural variants (SVs) are a common form of genetic variant and typically refer to structural differences greater than 50 base pairs in genomes, including insertions (INSs), deletions (DELs), duplications, inversions, translocations, etc. (Feuk et al. 2006). Compared to single nucleotide polymorphisms (SNPs) and small insertions and deletions (INDELs), SVs often have significant impacts on organisms (Garcia-Prieto et al. 2022). For example, large INSs or DELs may lead to changes or loss of gene function, resulting in the occurrence of genetic diseases (Sone et al. 2019). Replication or amplification of repetitive sequences can alter the copy number of genes, affecting gene expression and function (Chiang et al. 2017). Inversion and translocation events can cause rearrangements of chromosomal regions, thereby affecting genome stability and function (C Yuen et al. 2017).
The commonly used strategies for detecting SVs can be mainly classified as: Read Depth (RD) based (Klambauer et al. 2012), Split Read (SR) based (Ye et al. 2009), Discordant Read Pair (RP) based (Chen et al. 2009), de novo assembly (AS) based (Chen et al. 2014), hybrid methods based on multiple operations (Chen et al. 2016), and SV signatures for some long-read based callers (Heller and Vingron 2019, Jiang et al. 2020).
Current SV detection tools usually yield a substantial number of false positive calls due to the repetitive nature of the human genome and the limitations of existing sequencing technologies and alignment algorithms. To solve this problem, researchers usually filter the results of SV detection to enhance the overall accuracy. Existing approaches for SV filtering involve manual screening with visualization tools such as the integrative genomics viewer (IGV) (Robinson et al. 2011), svviz (Spies et al. 2015), Samplot (Belyeu et al. 2021), etc., or the use of heuristic filters with manually selected parameters. These methods are often time-consuming and require expert guidance to determine the appropriate parameters (Liu et al. 2021). Therefore, it is necessary to develop an efficient SV filtering tool to filter the detection results.
Recently, deep learning has been applied as a new approach for variant calling (Walsh et al. 2021). DeepVariant (Poplin et al. 2018) utilizes convolutional neural networks (CNNs) and recurrent neural networks (RNNs) to model and forecast sequencing data, enabling the precise identification of SNPs and INDELs. Clair3 (Zheng et al. 2022) combines deep learning with traditional statistical models to detect single nucleotide variants (SNVs) and INDELs. However, DeepVariant and Clair3 can only detect small-scale variants like SNPs, SNVs, or INDELs. DeepSVFilter (Liu et al. 2021) is a deep learning-based SV filtering tool. It maps input genomic data into images through feature extraction and subsequently employs CNNs and RNNs to learn the mapping relationship from features to SVs. This process enables the filtering of potential SV candidates, thereby reducing false positive SV calls, but DeepSVFilter can only filter results generated by SV detection tools for short reads.
Third-generation sequencing is characterized by long read lengths and high error rates (Jackman et al. 2018). The long read length facilitates the detection of large-scale genomic variants, while the high error rate increases the risk of generating false positive calls during variant detection, making it necessary to develop specialized SV detection algorithms for long reads. Some SV detection tools for long reads have been developed, including PBSV (Pacific Biosciences 2021), Sniffles2 (Sedlazeck et al. 2018), SVIM (Heller and Vingron 2019), cuteSV (Jiang et al. 2020), SVision (Lin et al. 2022), SVcnn (Zheng and Shang 2023), cnnLSV (Ma et al. 2023), etc. Although these third-generation SV detection tools have made great strides, they still suffer from a large number of false positive calls (Kosugi et al. 2019). SV detection tools for long reads therefore also require proper filtering methods.
In this article, we developed CSV-Filter, a deep learning-based SV filtering tool for both short reads and long reads. CSV-Filter uses a novel multi-level grayscale image encoding method based on the CIGAR string in the sequence alignment information, which ensures robust applicability to both short and long reads. We redefined the transfer learning pre-processing layers and applied image augmentation to the generated images. CSV-Filter also employs fine-tuning-based transfer learning (Szegedy et al. 2016) of a self-supervised pre-trained model, which boosts the model's accuracy and generalization ability and significantly reduces the need for the large amounts of annotated data required by traditional CNN models for supervised learning. Lastly, CSV-Filter utilizes mixed-precision operations to accelerate the training process and reduce the GPU memory footprint. Experiments show that integrating CSV-Filter with popular SV detection tools can significantly reduce false positive SV calls for both short reads and long reads.
Materials and methods
The workflow of CSV-Filter is illustrated in Fig. 1. CSV-Filter first extracts SV information from a high-confidence SV call set and constructs an index for the alignment file (Fig. 1a). This step involves obtaining SV sites and their corresponding information, while the alignment file index ensures the retrieval of alignment information in subsequent operations. Subsequently, CSV-Filter selects the reads within each SV region and encodes a multi-level grayscale image for each SV site based on the CIGAR strings of the selected reads (Fig. 1b). The generated images are then transformed to meet the input requirements of the model through pre-processing layers in transfer learning (Fig. 1c).
During training, CSV-Filter employs a pre-trained self-supervised learning model and classifies the corresponding images into different SV types based on the training results. Finally, CSV-Filter utilizes the trained model to filter SV detection results and outputs the filtered variants (Fig. 1d).
Multi-level grayscale image encoding based on CIGAR strings
The main challenge in utilizing deep learning for variant filtering lies in encoding sequence information into image representations while preserving the original SV information as much as possible. To address this challenge, we proposed a multi-level grayscale image encoding method based on CIGAR strings. The utilization of CIGAR strings offers three distinct advantages: 1) CIGAR strings are universally present in alignment files of both short reads and long reads, making them highly versatile across diverse sequencing technologies. 2) The CIGAR format defines nine types of operations to represent alignment results: M (MATCH), I (INSERT), D (DELETE), N (SKIP), S (SOFT CLIP), H (HARD CLIP), P (PAD), = (SEQUENCE MATCH), and X (SEQUENCE MISMATCH) (Danecek et al. 2021), which are applicable to various alignment scenarios. 3) CIGAR strings contain length information that represents the relative position between the aligned reads and the reference genome, including the number of inserted or deleted bases and other variant features.
Figure 2 shows the image encoding process in CSV-Filter, which can be mainly divided into three steps: 1) sites locating, 2) reads selection, and 3) images encoding.
Sites locating
CSV-Filter encodes one image for each SV site. These SV sites are extracted from a high-confidence SV call set. As the high-confidence SV call set does not contain the negative samples required for model training, we need to generate an appropriate number of negative samples to train and evaluate the model.
By analyzing the distribution of SV regions, we found that the lengths of SVs follow a Poisson distribution (Xiang et al. 2022). We calculated the mean and variance of the SVs, and their harmonic mean was taken as the mean and variance for the negative samples. The negative samples were then generated using the probability mass function of the Poisson distribution, as shown in equation (1):

$$P(k;\,\lambda_{\mathrm{neg}}) = \frac{\lambda_{\mathrm{neg}}^{\,k}\, e^{-\lambda_{\mathrm{neg}}}}{k!} \qquad (1)$$

where $\lambda_{\mathrm{sv}}$ and $\lambda_{\mathrm{neg}}$ represent the mean and variance of the SVs and the negative samples, respectively. CSV-Filter generates negative samples iteratively. A generated sample is dropped and regenerated if it overlaps by more than half with an adjacent SV. CSV-Filter repeats this process until a sufficient number of negative samples is obtained. After the iterations complete, CSV-Filter normalizes the outputs to guarantee that the generated samples fall within the acceptable range. The details of this process are provided in Algorithm S1.
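To make the sampling procedure concrete, the following is a minimal sketch of the iterative negative-sample generation described above, assuming NumPy. The function name, the half-overlap rejection test, and the clipping to a single chromosome range are illustrative choices, not the exact logic of Algorithm S1.

```python
import numpy as np

def generate_negative_samples(sv_intervals, n_needed, lam_neg, chrom_len, rng=None):
    """Draw candidate negative intervals with Poisson-distributed lengths,
    rejecting any candidate that overlaps an SV by more than half its length."""
    rng = rng or np.random.default_rng()
    negatives = []
    while len(negatives) < n_needed:
        length = max(1, rng.poisson(lam_neg))        # Poisson-distributed length
        start = int(rng.integers(0, chrom_len - length))
        end = start + length
        # Largest overlap of the candidate with any existing SV interval.
        overlap = max((min(end, e) - max(start, s) for s, e in sv_intervals
                       if min(end, e) > max(start, s)), default=0)
        if overlap <= length / 2:                    # keep only mostly SV-free regions
            negatives.append((start, end))
    return negatives
```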
Reads selection
Once all SV sites are located, CSV-Filter selects the corresponding reads for each site. Figure 2a illustrates this process. CSV-Filter extends forward and backward from each site by a certain distance and selects the reads that overlap with the extended region.
Images encoding
CSV-Filter encodes images based on the CIGAR strings included in the alignment information of the selected reads. We collected a large number of alignment results from several major genome projects and compiled statistics on the CIGAR operations. The statistics revealed that the operations "M," "I," "D," and "S" together account for a very high proportion (over 98%). Supplementary Figure S2 and Supplementary Table S1 show the proportions of CIGAR operations in the alignment files. Hence, we chose the most representative "M," "I," "D," and "S" operations to encode images, which not only enhances model accuracy and data processing efficiency but also mitigates the risk of overfitting and unnecessary data redundancy.
CSV-Filter assigns four distinct grayscale values in the range (0, 255) to represent the four operations "M," "I," "D," and "S," based on the CIGAR string values of the current read. For offset distances and operations such as "N," "P," "H," "=," and "X," the corresponding grayscale values are set to 0. Following this, CSV-Filter iterates through all selected reads to generate the raw image. Finally, the raw image is normalized by stretching/compressing its x-axis and y-axis lengths to 224. This normalization ensures that the encoded images conform to the input dimensions required for the subsequent transfer learning phase. The detailed process of image encoding is provided in Algorithm S2 and illustrated in Fig. 2c.
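As a sketch of this per-read encoding, the snippet below parses a CIGAR string and paints one image row with a distinct grayscale level per retained operation. The specific level values, the fixed row width, and the use of Pillow for the 224 × 224 resize are assumptions for illustration, not the exact constants of Algorithm S2.

```python
import re
import numpy as np
from PIL import Image

# Assumed grayscale levels for the four retained CIGAR operations.
GRAY = {"M": 255, "I": 190, "D": 120, "S": 60}

def encode_read(cigar, width):
    """Paint one image row from a CIGAR string; all other ops map to 0."""
    row, pos = np.zeros(width, dtype=np.uint8), 0
    for length, op in re.findall(r"(\d+)([MIDNSHP=X])", cigar):
        length = int(length)
        if op in GRAY:
            row[pos:pos + length] = GRAY[op]
        pos += length          # simplified: every op advances the row cursor
        if pos >= width:
            break
    return row

def encode_site(cigars, width=1000):
    """One raw image per SV site: one row per read, then resize to 224 x 224."""
    raw = np.stack([encode_read(c, width) for c in cigars])
    return np.array(Image.fromarray(raw).resize((224, 224)))
```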
RGB conversion
The pre-processing layer in transfer learning provides appropriate input data to facilitate effective knowledge transfer and model training. We redefined the pre-processing layer in CSV-Filter, which encompasses two aspects. First, it adjusts the encoded images to meet the requirements of the pre-trained models used in transfer learning, thereby enhancing the model's ability to extract SV features. Given that the encoded images are grayscale and sized 224 × 224, CSV-Filter converts the image data to the Python Imaging Library (PIL) format and transforms the input image to RGB mode, ensuring compliance with the pre-trained model's requirements. Second, it applies random color jitter transformations to the converted RGB images to increase data diversity and mitigate data imbalance issues. At the same time, we normalize the image data to improve the model's stability and generalization ability, ensuring a consistent scale and distribution of the input data. These steps boost the model's performance and facilitate better compatibility with pre-trained models.
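A minimal torchvision version of such a pre-processing layer might look as follows; the jitter strengths and the ImageNet normalization statistics are illustrative assumptions, not CSV-Filter's exact settings.

```python
from torchvision import transforms

preprocess = transforms.Compose([
    transforms.Lambda(lambda img: img.convert("RGB")),   # grayscale PIL image -> RGB
    transforms.ColorJitter(brightness=0.2, contrast=0.2, saturation=0.2),
    transforms.ToTensor(),                               # HWC [0, 255] -> CHW [0, 1]
    transforms.Normalize(mean=[0.485, 0.456, 0.406],     # ImageNet statistics
                         std=[0.229, 0.224, 0.225]),
])
```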
Fine-tuning
In traditional transfer learning, training is typically conducted with two separate components: the feature extractor and the classifier. Fine-tuning improves on this by training not only the classifier but the entire model, making the approach more flexible and comprehensive.
CSV-Filter employs fine-tuning to further train a pre-trained self-supervised learning model for SV filtering. This consists of two main steps: pre-training and fine-tuning. Pre-training uses self-supervised learning, an unsupervised learning approach that designs tasks for the model to generate labels or targets from unlabeled data, thereby learning useful representations or features. Compared to conventional supervised learning, self-supervised learning does not require manual annotation and can leverage unlabeled data, thus overcoming the dependency on a large amount of labeled data. Self-supervised learning also exhibits strong generalization ability. By conducting self-supervised learning on a large-scale unlabeled dataset, the model can learn generic feature representations that can be transferred and applied across various tasks and domains. This enables the model to perform well and exhibit better generalization capabilities when facing tasks with limited labeled data.
We employed Variance-Invariance-Covariance Regularization (VICReg) (Bardes et al. 2021) to regularize the output representations of the model. VICReg addresses potential collapse issues during model training through three regularization terms: variance, covariance, and invariance. Variance regularization maintains the variance of each embedding dimension above a certain threshold, preventing all inputs from mapping to the same vector. Covariance regularization reduces the covariance between pairs of embedding variables to near 0, decorrelating the variables and preventing information redundancy. Invariance regularization minimizes the distance between the embedding vectors of different views of the same image. During the fine-tuning step, we introduce negative samples to enhance the discriminative capability of the self-supervised model. The inclusion of negative samples also prevents all inputs from mapping to the same embedding during the training phase, further mitigating the risk of representation collapse.
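A compact PyTorch sketch of the three VICReg terms is given below; the loss weights follow the defaults suggested by Bardes et al. (2021) and are assumptions here rather than CSV-Filter's tuned values.

```python
import torch
import torch.nn.functional as F

def vicreg_loss(z1, z2, w_inv=25.0, w_var=25.0, w_cov=1.0):
    """z1, z2: (N, D) embeddings of two augmented views of the same images."""
    inv = F.mse_loss(z1, z2)                          # invariance term

    def var(z):                                       # variance term: keep per-dim std >= 1
        std = torch.sqrt(z.var(dim=0) + 1e-4)
        return torch.mean(F.relu(1.0 - std))

    def cov(z):                                       # covariance term: decorrelate dims
        z = z - z.mean(dim=0)
        c = (z.T @ z) / (z.shape[0] - 1)
        off_diag = c - torch.diag(torch.diag(c))
        return off_diag.pow(2).sum() / z.shape[1]

    return w_inv * inv + w_var * (var(z1) + var(z2)) + w_cov * (cov(z1) + cov(z2))
```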
After pre-training, the pre-trained model is further trained to adapt it to the task of SV filtering. The specific steps include: importing the pre-trained model, freezing certain layers of the network, adjusting the learning rate appropriately, retraining and fine-tuning the model using the encoded image data, and iteratively optimizing the model. Through fine-tuning, the model is able to leverage the generic features learned during the pre-training step and make specific adjustments for the task of SV filtering, thereby improving the overall performance of the model.
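These steps could be sketched as below; the checkpoint path, the choice of which layers to freeze, and the learning rate are illustrative assumptions rather than CSV-Filter's exact settings.

```python
import torch
from torchvision import models

model = models.resnet50()                                   # backbone architecture
model.load_state_dict(torch.load("pretrained_vicreg.pt"))   # hypothetical checkpoint

# Freeze the early layers; keep the later blocks and the head trainable.
for name, p in model.named_parameters():
    if name.startswith(("conv1", "bn1", "layer1", "layer2")):
        p.requires_grad = False

# Fine-tune the remaining parameters with a reduced learning rate.
optimizer = torch.optim.Adam(
    filter(lambda p: p.requires_grad, model.parameters()), lr=1e-4)
```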
Classification
After each training iteration, the classification layer in transfer learning uses the features extracted by the trained model to classify SVs according to the predefined labels. It consists of attention fully connected units, fully connected units, and fully connected classification units. The attention fully connected unit is composed of three sequential operations: an attention operation, a fully connected operation, and a ReLU activation. The fully connected units comprise a fully connected operation and a ReLU activation in sequential order. The fully connected classification units comprise a fully connected operation and a Softmax operation. We combined two attention fully connected units and one fully connected unit into a one-dimensional attention residual module to accomplish feature extraction. After the above operations, the extracted features are fed into the fully connected classification units to obtain probabilities for each SV type. The classification result is determined by selecting the SV type with the highest probability. The details of the classification layer are provided in Supplementary Figure S1 and Supplementary Table S2.
Additionally, CSV-Filter adopts mixed-precision operations for model training to address the issues of long training times and high GPU memory usage. For computationally intensive operations such as matrix multiplication and convolution, CSV-Filter employs low precision, thereby reducing memory usage and computational workload and accelerating training and inference. For critical steps involving gradient updates and parameter updates, which are sensitive to numerical precision, CSV-Filter still employs high precision to ensure the accuracy and stability of the model. Overall, adopting mixed precision reduces CSV-Filter's runtime and GPU memory usage by approximately 45% and 42%, respectively, with the model's overall accuracy almost unchanged. Experimental details are shown in Supplementary Figures S4 and S5.
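In PyTorch, this scheme corresponds to wrapping the forward pass in an autocast region while keeping gradient scaling and the parameter update in full precision; a minimal training-step sketch:

```python
import torch

scaler = torch.cuda.amp.GradScaler()

def train_step(model, images, labels, optimizer, loss_fn):
    optimizer.zero_grad()
    # Matrix multiplies and convolutions run in reduced precision here.
    with torch.cuda.amp.autocast():
        loss = loss_fn(model(images), labels)
    # Gradient scaling and the parameter update stay in full precision.
    scaler.scale(loss).backward()
    scaler.step(optimizer)
    scaler.update()
    return loss.item()
```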
Filtering SV detection results
Once training is complete, CSV-Filter can use the trained model to filter SV detection results. During this process, CSV-Filter is capable of processing the SV calls generated from both short reads and long reads. Figure 1d illustrates the main filtering process. Initially, the SV detection tool analyzes alignment sequences and generates the raw SV calls. Next, CSV-Filter extracts the corresponding SV information based on these raw SV calls. Subsequently, CSV-Filter employs the same approach as above to encode the SV information into images. Finally, CSV-Filter applies the trained model to the generated images to identify false positive SV calls.
Datasets and experimental configuration
In this study, we used two samples, HG002 and NA12878, from the NIST Genome in a Bottle (GIAB) project (Zook et al. 2014) to evaluate the performance of CSV-Filter. The Tier 1 benchmark SV call set covers 2.51 Gbp and includes 4,199 deletions and 5,442 insertions in the defined high-confidence HG002 regions (Zook et al. 2020). Raw PacBio CLR, HiFi, and ONT reads were aligned to GRCh37 using minimap2 (v2.28), pbmm2 (v1.13.1), and NGMLR (v0.2.7). Raw Illumina reads were aligned to the hs37d5 reference using BWA-MEM (Li 2013). The NA12878 gold standard SV set includes 3,789 deletions and 5,815 insertions. Raw PacBio CLR and Illumina reads were aligned to hg19 and GRCh38DH using BLASR v1.3.2 and BWA-MEM, respectively. The details of the datasets are provided in the Supplementary data. In the experiments, we used the HG002 PacBio HiFi dataset for model training and accuracy assessment. We randomly selected 80% of the data as the training set and the remaining 20% as the validation and test sets. In the evaluation of CSV-Filter's filtering performance, we first tested the filtering performance of CSV-Filter on long reads. Subsequently, we compared the filtering performance of CSV-Filter with that of DeepSVFilter on short reads. We chose a range of quality metrics commonly used in deep learning to evaluate the performance of the model, including the receiver operating characteristic (ROC) curve, accuracy, precision, recall, and F1 score. The details of these metrics are provided in the Supplementary data.
CSV-Filter is implemented based on the PyTorch framework. We trained our model using the Adam optimizer (Kingma and Ba 2014). The parameters used by the read alignment, SV detection, and validation tools in the experiments can be found in the Supplementary data. The configuration of the server used is provided in Supplementary Table S3.
Model performance in CNN and self-supervised learning models
To demonstrate the discriminative accuracy of CSV-Filter, we conducted validation using five CNN models and four self-supervised models. The five CNN models were MobileNet v2, ResNet34, ResNet50, ResNet50(x2), and ResNet200(x2). The MobileNet v2 and ResNet models are based on the PyTorch framework and are pre-trained on the ImageNet dataset (Deng et al. 2009). Leveraging the powerful feature discrimination of the ImageNet pre-trained models, the trained models achieved classification of SVs. We first compared the discriminative performance of the different types of models. Then, we discussed the impact of different depths and widths on discriminative performance within the ResNet models. Finally, we compared the impact of self-supervised learning on model accuracy. The details of the nine models and their training process are provided in Supplementary Table S4 and Supplementary Figures S6-S11.
To evaluate the performance of CSV-Filter, we computed precision, recall, and F1 score separately and then obtained the macro-averaged values as the evaluation results for the CNN models. To comprehensively assess the discriminative performance, we compared the F1 scores for each SV type. The results are presented in Supplementary Tables S5-S7. From the results, CSV-Filter achieved its best performance with the ResNet50(x2) model, whose accuracy reached 94.05%. Compared to the CNN models, CSV-Filter demonstrated performance improvements after incorporating self-supervised training. Specifically, the ResNet50(x2) model achieved a performance gain of 0.89%, and the F1 scores for INS, DEL, and NEG (negative samples) reached 96.28%, 92.81%, and 95.06%, respectively. This result indicates that self-supervised learning models with VICReg regularization exhibit stronger generalization capabilities and robustness, enabling better feature discrimination.
Figure 3 depicts the discriminative performance of the three self-supervised learning models. The ROC-AUC value for INS discrimination reached as high as 0.996, and each model's ROC-AUC values exceeded 0.9 for all three discrimination tasks. The performance of the models further improved when the model width was doubled (Supplementary Table S6). However, as even more parameters were added, the performance declined, even falling slightly below the level of the original ResNet50 model. This indicates that increasing the model width allows the model to capture more discriminative features, thereby improving discriminative performance, but with too many parameters the model may overfit, leading to a decrease in accuracy. Considering all factors, the ResNet50(x2) model achieved the most balanced performance.
Table 1 shows the performance of CSV-Filter in filtering long reads. It can be observed that the precisions increase while the recalls do not significantly decrease for PacBio CLR, PacBio HiFi, and ONT reads after filtering; that is, CSV-Filter reduces false positives while maintaining the number of true positives. Notably, for PBSV and Sniffles2 on PacBio CLR reads and PBSV on PacBio HiFi reads, CSV-Filter improved the precision by 6.23%, 4.39%, and 11.05%, respectively, while keeping the recall almost unchanged.
Figure 4 shows the F1 scores for different SV types before and after filtering. The figure shows that CSV-Filter performs better on INS variants. Additionally, its performance is negatively correlated with the accuracy of the dataset, meaning that it is more effective for datasets with lower accuracy (e.g., PacBio CLR). Both INS variants and low-accuracy datasets tend to have a higher number of false positives in their detection results, so the experimental results indicate that CSV-Filter tends to perform better in scenarios with higher false positive rates. Detailed results of CSV-Filter's filtering performance on different variant types in long-read data can be found in Supplementary Figures S13 and S14 and Supplementary Tables S10 and S11.
We also tested CSV-Filter's performance on the CHM13 cell line. CHM13 includes a complete telomere-to-telomere assembly, providing a high-quality human genome reference. We used Dipcall (Li et al. 2018) to generate an assembly-based SV call set on the CHM13 assembly and selected Dipcall's high-confidence regions as the "ground truth". The experiments were performed on PacBio CLR, PacBio HiFi, and ONT reads. The filtering results for different SV types are shown in S17. The experimental results show that the precision significantly increases while the recall remains almost unchanged. Specifically, for PBSV, the precision for all SV types across the three alignment results increases by 9.47%, 14.11%, and 5.32%, respectively. This indicates that CSV-Filter can effectively support T2T assemblies, and a higher-quality reference can further enhance its filtering performance.
The above results indicate that CSV-Filter has good generalizability and can filter detection results called from various long reads. Additionally, the filtering effect is more pronounced when the number of false positives in the detection results is high.
Table 3 shows the filtering performance of CSV-Filter and DeepSVFilter for deletion variants in short reads. For the detection results of DELLY, CSV-Filter improved the precision by 14.65% while keeping the recall almost unchanged. For the detection results of LUMPY, Manta, SvABA, and Cue, DeepSVFilter's precision is higher than that of CSV-Filter, but its recall decreases significantly, indicating that DeepSVFilter loses some true positives while filtering out false positives. Conversely, CSV-Filter's recall remains almost unchanged, indicating a better filtering effect. The F1 scores further support this analysis. The changes in the number of SVs before and after filtering can be found in Supplementary Table S14.
The results indicate that CSV-Filter's image encoding retains more SV information than DeepSVFilter's. Meanwhile, the models generated by CSV-Filter exhibit a better capacity to learn the mapping relationship from features to SVs.
Conclusion
In this article, we proposed CSV-Filter, a novel deep learning-based SV filtering method. CSV-Filter encodes CIGAR strings into images and adopts fine-tuning of a self-supervised model for model training. Experiments on real datasets show that CSV-Filter has good discriminative performance and can significantly reduce false positive SV calls. It also exhibits strong generalization capability, filtering results for both short reads and long reads.
Although many SV call sets are publicly available, large and balanced datasets suitable for training remain very limited. Moreover, these datasets usually contain only INS and DEL variant types. To address this issue, we can construct high-confidence simulated datasets to compensate for the lack of labeled real data. Additionally, the quality of alignment results can affect filtering performance, because alignment accuracy may decrease for repetitive sequences, highly polymorphic regions, or complex genomic structures, thereby affecting subsequent detection and filtering. We will consider refining alignments in these complex regions.
CSV-Filter can also support sequencing data of other species.In future work, we will train new models for different species to further enhance the generality of the models.
Figure 1. The workflow of CSV-Filter. (a) SV information extraction and alignment file index construction. (b) Multi-level grayscale image encoding based on CIGAR strings. (c) Model training and SV classification. (d) Filtering of SV detection results.
Figure 3. ROC curves of the self-supervised learning models ResNet50, ResNet50(x2), and ResNet200(x2). (a) ROC curves for insertion discrimination. (b) ROC curves for deletion discrimination. (c) ROC curves for negative-sample discrimination.
Figure 4. The F1 scores of different SV types before and after CSV-Filter filtering. The experiments were performed on the long-read HG002 sample, including PacBio CLR, PacBio HiFi, and ONT reads. Hollow and solid points represent the F1 scores before and after filtering, respectively.
Table 1. The filtering performance of CSV-Filter for HG002 long reads: precision, recall, and F1 score in SV calling. Bold values indicate the best results. The reads are from PacBio CLR, PacBio HiFi, and ONT of sample HG002.
Table 2. The filtering performance of CSV-Filter for the telomere-to-telomere assembly of CHM13 long reads.
Table 3. The filtering performance of CSV-Filter for HG002 short reads. Bold values indicate the best results. The reads are from Illumina of sample HG002. a,b The proportion of TP numbers in the benchmark SV call set and detected SVs. c Cue is designed for detecting long SVs (Popic et al. 2023), and the results in the table are for SVs longer than 5,000 bp. | 5,797.4 | 2024-09-01T00:00:00.000 | [
"Computer Science",
"Biology"
] |
Synthesized Image Reconstruction for Post-Reconstruction Resolution Recovery
Resolution recovery (RR) techniques in positron emission tomography (PET) imaging aim to mitigate spatial resolution losses and related inaccuracies in quantification by using a model of the system’s point spread function (PSF) during reconstruction or post-processing. However, including PSF modeling in fully 3-D image reconstruction is far from trivial as access to the scanner-specific forward and back-projectors is required, along with access to the 3-D sinogram data. Hence, post-reconstruction RR methods, such as the Richardson–Lucy (RL) algorithm, can be more practical. However, the RL method leads to relatively rapid noise amplification in early image iterations, giving inferior image quality compared to iterates obtained by placing the PSF model in the reconstruction algorithm. We propose a post-reconstruction RR method by synthesizing PET data by a forward projection of an initial real data reconstruction (such reconstructions are usually available via a scanner’s standard reconstruction software). The synthetic PET data are then used to reconstruct an image, but crucially now including a modeled PSF within the system model used during reconstruction. Results from simulations and real data demonstrate the proposed method improves image quality compared to the RL algorithm, whilst avoiding the need for scanner-specific projectors and raw sinogram data (as required by standard PSF modeling within reconstruction).
Laurence Vass and Andrew J. Reader, Member, IEEE
I. INTRODUCTION
Positron emission tomography (PET) imaging is a powerful clinical and research tool; one of its major strengths is the ability to provide quantitative values that reflect physiological or biological processes. Hindering that goal is the characteristically low spatial resolution of PET imaging, which leads to inaccurate quantification and degrades image quality [1]. Among other factors, positron range contributes to the loss of spatial resolution in PET. Clinical PET imaging has relied on fluorine-18, for which the loss of spatial resolution due to positron range is considered small. Yet there are an increasing number of radionuclides emitting high-energy positrons that are useful in clinical and research settings. For example, gallium-68-based radiopharmaceuticals have numerous clinical applications [2], but with a maximum positron range in water approximately fourfold greater than that of fluorine-18, positron range becomes an important contributor to poor image quality. In low-density tissues, most notably the lungs, positron range increases and exacerbates the problem [3]. Furthermore, in preclinical small-animal imaging, where anatomical structures are more than 100-fold smaller than in humans, positron range can easily be the dominant factor contributing to the deterioration of spatial resolution.
Resolution recovery (RR) techniques attempt to mitigate the loss of spatial resolution. Traditionally, two main techniques have emerged, both of which require knowledge of the PET scanner's point spread function (PSF): 1) incorporating the PSF within statistical iterative algorithms and 2) applying the PSF on a post-reconstruction basis. Contemporary commercial PET scanners often opt for the former approach. Estimating an accurate scanner-specific PSF is nontrivial, but several techniques exist and are reviewed elsewhere [4]. Notably, vendors often use PSF kernels that are based on radionuclides with short positron ranges and model only detector blurring [5]. Building PSF modeling into image reconstruction is far from trivial: to modify such PSF-based image reconstruction to explicitly account for positron range requires access to the forward and back projectors related to the scanner geometry, and these are difficult to obtain, limiting the application of the technique. Hence, post-reconstruction methods can be far more practical and widely applicable. An example of this latter approach is the Richardson-Lucy (RL) algorithm, a popular implementation in PET imaging [6]. The RL algorithm is simpler to implement since knowledge of the reconstruction algorithm and geometry is not required. Yet image noise can rapidly accumulate during the process, resulting in unacceptably poor image quality. Indeed, noise build-up is also an issue for PSF-based reconstructions, where early termination and smoothing filters are typically applied as a remedy. More recently, techniques that use artificial intelligence (AI), specifically deep learning, to correct PET images for positron range effects have shown promising results [7]. However, in the context of positron range correction, AI-based methods have yet to be compared to existing techniques, such as PSF-based reconstruction or the RL algorithm.
In this work, we propose a novel post-reconstruction technique in which RR is embedded into a synthesized image reconstruction problem. In essence, this frames the reconstruction task as an inverse problem that favorably decelerates the reconstruction process, enabling a better sequence of iterates when an early-termination methodology is applied (often the case in practice). Initial findings from a 2-D simulation of a digital thorax phantom demonstrated a performance gain compared to the RL algorithm [8]. Here, we evaluate the method further by investigating the influence of the hyperparameters of the synthesized reconstruction algorithm, measuring quantitative performance in different-sized regions of interest (ROIs), comparing the approach in additional digital phantoms, and demonstrating the proposed method on real preclinical images.
A. Synthesized Reconstruction Theory
For PET, raw data representing the distribution of a positron-emitting radiotracer measured on a scanner can be conveniently represented in the form of sinograms [9]. The goal of image reconstruction is to model the mean of the PET radiotracer activity concentration distribution that best agrees with a given set of measured coincidences (e.g., sinogram data). This is obtained by maximizing a Poisson log-likelihood objective function; a robust method to obtain the solution is the maximum-likelihood expectation maximization (MLEM) algorithm

$$\theta^{(k+1)} = \frac{\theta^{(k)}}{X^{T}\mathbf{1}} \odot X^{T}\!\left(\frac{m}{X\theta^{(k)}}\right) \qquad (1)$$

where X is the system matrix, θ^(k) is the image estimate at iteration k, m are the measured data (e.g., in the form of sinograms), the multiplications and divisions are element-wise, and the denominator X^T 1 is the sensitivity image. The algorithm is typically terminated early to mitigate excessive noise in the final reconstructed image. For PSF-based image reconstruction, resolution modeling can be incorporated via a shift-equivariant PSF kernel contained in a circulant matrix P, as follows:

$$\theta^{(k+1)} = \frac{\theta^{(k)}}{P^{T}X^{T}\mathbf{1}} \odot P^{T}X^{T}\!\left(\frac{m}{XP\theta^{(k)}}\right) \qquad (2)$$

This allows the recovery of spatial resolution loss by incorporating knowledge of the PSF into the measurement process. The various components of PSF modeling can be performed in image space [10] or projection space. In practice, measurements of the PSF in PET scanners reveal they are space-variant and anisotropic. In this work, the PSF kernel models positron range, which is most appropriately modeled in image space [11].
The RL algorithm [12] is a post-reconstruction technique which operates in image space. An update of the RL algorithm is described by

$$\theta^{(l+1)} = \frac{\theta^{(l)}}{P^{T}\mathbf{1}} \odot P^{T}\!\left(\frac{\theta^{(K)}}{P\theta^{(l)}}\right) \qquad (3)$$

Here, θ^(K) is the reconstructed image using the MLEM algorithm at the final iteration K, and θ^(l) is the current image estimate.
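As a point of reference, a minimal image-space implementation of the RL update in (3) for a normalized, shift-invariant PSF (so that P^T 1 = 1) might read as follows; the small floor on the denominator is an assumption added for numerical safety.

```python
import numpy as np
from scipy.signal import fftconvolve

def richardson_lucy(mlem_image, psf, n_iters):
    """RL deconvolution of an MLEM reconstruction; psf must sum to 1."""
    theta = mlem_image.copy()
    psf_T = psf[::-1, ::-1]                  # adjoint of convolution = flipped kernel
    for _ in range(n_iters):
        blurred = fftconvolve(theta, psf, mode="same")       # P theta
        ratio = mlem_image / np.maximum(blurred, 1e-12)      # theta^(K) / (P theta)
        theta *= fftconvolve(ratio, psf_T, mode="same")      # multiply by P^T(ratio)
    return theta
```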
In the proposed method, synthetic data, m_syn, are generated using a virtual scanner geometry with system matrix S, via the synthesized forward model

$$m_{\mathrm{syn}} = S\theta^{(K)} \qquad (4)$$

Note that these data are consistent with the system model. PSF modeling is then incorporated, via P, into a new synthesized image reconstruction problem

$$\theta^{(l+1)} = \frac{\theta^{(l)}}{P^{T}S^{T}\mathbf{1}} \odot P^{T}S^{T}\!\left(\frac{m_{\mathrm{syn}}}{SP\theta^{(l)}}\right) \qquad (5)$$

To summarize, the proposed method takes a reconstructed PET image (which has spatial resolution losses caused by positron range effects not having been modeled) and forward projects the image using a virtual scanner geometry to synthesize sinogram data. These synthetic data are then used in a reconstruction problem with consistent projectors that aims to recover the loss of resolution due to positron range. Importantly, the input image could be obtained from any PET scanner and the system matrix/scanner geometry does not need to be known; on the contrary, the geometry of the virtual scanner can be chosen by the user.
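A schematic NumPy/scikit-image version of the synthesized reconstruction in (4) and (5) is sketched below, using the Radon transform as the virtual projector S and unfiltered backprojection as an approximate (unnormalized) adjoint; it assumes a recent scikit-image where filter_name=None gives plain backprojection, and the sensitivity image in the denominator absorbs the adjoint's scaling.

```python
import numpy as np
from scipy.signal import fftconvolve
from skimage.transform import radon, iradon

def synthesized_recon(mlem_image, psf, n_iters, theta_deg=np.arange(1, 181)):
    fwd = lambda x: radon(x, theta=theta_deg)                      # S
    bwd = lambda y: iradon(y, theta=theta_deg, filter_name=None)   # ~ S^T
    blur = lambda x: fftconvolve(x, psf, mode="same")              # P (= P^T for a symmetric PSF)
    m_syn = fwd(mlem_image)                                        # eq. (4)
    sens = blur(bwd(np.ones_like(m_syn)))                          # P^T S^T 1
    est = mlem_image.copy()
    for _ in range(n_iters):                                       # eq. (5)
        proj = fwd(blur(est))
        est *= blur(bwd(m_syn / np.maximum(proj, 1e-12))) / np.maximum(sens, 1e-12)
    return est
```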
All RR methods require knowledge of the PSF kernel, but notably, MLEM with PSF modeling [see (2)] also requires knowledge of the scanner-specific system matrix, X, and the raw measured data, m. Despite the apparent similarities between the proposed method and MLEM+PSF [i.e., comparing (2) and (5)], the proposed method only requires an image, θ^(K), and the matrix P in order to perform RR. This is a significant practical advantage, whilst offering the potential for performance comparable to that of MLEM+PSF (which is favorable to the RL method in the majority of practical use cases).
B. Data Simulation
2-D digital phantoms representing three different count levels (high, mid, and low) were simulated and taken as the ground truth. These simulations represent PET acquisitions which, for example, correspond to differences in injected activity or acquisition times. The modified Shepp-Logan [13], BrainWeb [14], and Zubal thorax phantoms [15] were used. To assess spatial resolution, a 2-D slice of a Derenzo-style microPET phantom was simulated; the rod diameters were 1.1, 1.5, 2.3, 3.1, 3.9, and 4.7 mm, and a ratio of 4:1 was used for the rod-to-background radioactivity concentration. A 2-D Gaussian function was used to model the PSF for a gallium-68-based radiopharmaceutical; the PSF represents the contribution of positron range alone, with a full width at half maximum (FWHM) of 2.9 mm [16]. The measured data were simulated as follows: 1) ground truth 2-D images were blurred using the PSF model for gallium-68; 2) the blurred images were forward projected using a parallel line-integral model (the Radon transform) into a sinogram with angles in the range 1° to 180° at 1° intervals; and 3) random Poisson noise was added to the sinogram.
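The three simulation steps translate directly into code; below is a minimal sketch assuming NumPy, SciPy, and scikit-image, with the FWHM-to-sigma conversion and a 1 mm pixel size as illustrative assumptions.

```python
import numpy as np
from scipy.ndimage import gaussian_filter
from skimage.transform import radon

def simulate_sinogram(truth, fwhm_mm=2.9, pixel_mm=1.0, counts_scale=1.0, seed=0):
    rng = np.random.default_rng(seed)
    sigma = fwhm_mm / (2.355 * pixel_mm)            # FWHM = 2.355 sigma for a Gaussian
    blurred = gaussian_filter(truth, sigma)          # 1) positron-range blur
    sino = radon(blurred, theta=np.arange(1, 181))   # 2) parallel line integrals
    return rng.poisson(counts_scale * sino)          # 3) Poisson noise
```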
The proposed method was compared to the RL algorithm and to MLEM without and with PSF modeling, defined by (1) and (2), respectively; herein, they are referred to as MLEM and MLEM+PSF. MLEM+PSF is regarded as the reference standard in this work.
For the MLEM algorithm, the system matrix, X, was the Radon transform with projection angles in the range 1° to 180° at 1° intervals. The input image for the RL algorithm and the proposed method was reconstructed by MLEM (θ^(K)). Unless otherwise stated, the value of K = 64; this value was based on the experience of a typical clinical reconstruction software setting.
For the proposed method, the system matrix, S, was a discrete Radon transform. Unless otherwise stated, the projection angles were in the range 1° to 180° at 1° intervals. For all methods, the number of iterations chosen was large enough to ensure convergence (typically several hundred were required, depending amongst other factors on the phantom and count level).
MATLAB version R2020b [17] was used to implement the reconstruction algorithms and perform data analyses.
C. Experimental Preclinical Data
The proposed method was qualitatively assessed on experimental preclinical data. 68Ga-THP-PAM is a bone-seeking radiopharmaceutical which accumulates in the hydroxyapatite crystals of bone. The imaging protocol is described in detail elsewhere [18]; here we summarize the details. Images were acquired using the Mediso Nanoscan PET/CT with attenuation and scatter correction. Approximately 1.8 MBq was intravenously injected into the tail vein of an immunocompromised mouse and data were acquired for 60 min. Images were reconstructed using MLEM with 60 iterations and an isotropic voxel size of 0.21 mm. The projection angles were between 1° and 180° at 1° intervals. Given that the preclinical data represent blurring due to the positron range in 3-D, we modeled the PSF kernel as a 3-D Gaussian function, but implemented the forward and back-projectors in 2-D. To assess whether a more accurate PSF model for the positron range would be beneficial, we also implemented a monoexponential function to model the PSF. Previous studies [19] have shown that the positron range of gallium-68 in high-resolution PET can be well modeled as a monoexponential with an attenuation coefficient of 0.77 mm−1.
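A possible construction of such a kernel is sketched below: an isotropic monoexponential in 3-D, truncated to a finite support and normalized to unit sum. The support half-size is an illustrative assumption; the attenuation coefficient and voxel size follow the values stated above.

```python
import numpy as np

def monoexp_kernel(mu_per_mm=0.77, voxel_mm=0.21, half_size=10):
    """Isotropic monoexponential positron-range kernel, exp(-mu * r)."""
    ax = np.arange(-half_size, half_size + 1) * voxel_mm
    x, y, z = np.meshgrid(ax, ax, ax, indexing="ij")
    r = np.sqrt(x**2 + y**2 + z**2)
    kernel = np.exp(-mu_per_mm * r)
    return kernel / kernel.sum()             # normalize so that P^T 1 = 1
```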
D. Image Evaluation
The normalized root mean square error (RMSE) was the metric used to evaluate quantitative agreement with the ground truth. RMSE can be defined in terms of the bias and standard deviation of the image and was used to compare the performance of the different methods:

$$\mathrm{RMSE}^{(k)} = \sqrt{\left(\mathrm{Bias}^{(k)}\right)^{2} + \left(\mathrm{SD}^{(k)}\right)^{2}} \qquad (6)$$

with the bias and standard deviation defined by

$$\mathrm{Bias}^{(k)} = \sqrt{\frac{1}{|\Omega|}\sum_{j\in\Omega}\left(\bar{\theta}_{j}^{(k)} - \theta_{j}^{\mathrm{ref}}\right)^{2}}, \qquad \mathrm{SD}^{(k)} = \sqrt{\frac{1}{|\Omega|}\sum_{j\in\Omega}\frac{1}{Q}\sum_{q=1}^{Q}\left(\theta_{j,q}^{(k)} - \bar{\theta}_{j}^{(k)}\right)^{2}} \qquad (7)$$

where Q is the number of noise realizations, Ω is the evaluation domain, θ_j^ref are the pixel values of the true object, and θ̄_j^(k) is the mean reconstructed value for pixel j at iteration k. We evaluated these metrics globally (i.e., Ω = the entire image), and in a medium-sized ROI (70 pixels) and a small ROI (6 pixels) for features of interest. The number of noise realizations was chosen to ensure results were statistically valid and varied depending on the size of the domain: for the global and medium ROI, Q = 10, and for the small ROI, Q = 100.
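Assuming the reconstructions from the Q realizations are stacked in an array of shape (Q, H, W), these metrics reduce to a few NumPy lines; the normalization by the mean of the true object over the ROI is an assumption for illustration.

```python
import numpy as np

def bias_sd_rmse(recons, truth, roi=None):
    """recons: (Q, H, W) stack of reconstructions of the same object."""
    roi = np.ones_like(truth, dtype=bool) if roi is None else roi
    mean_img = recons.mean(axis=0)                          # mean over realizations
    bias = np.sqrt(np.mean((mean_img[roi] - truth[roi]) ** 2))
    sd = np.sqrt(np.mean(recons.var(axis=0)[roi]))          # per-pixel variance over Q
    rmse = np.sqrt(bias**2 + sd**2) / truth[roi].mean()     # normalized RMSE
    return bias, sd, rmse
```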
A. Qualitative Comparison and RMSE
Fig. 1 shows reconstructed images for the modified Shepp-Logan digital phantom at three different simulated count levels. For each simulated count level, the top row ("MinRMSE") corresponds to the minimum RMSE achieved for each method, i.e., the best agreement with the true object. Outside of simulated images the true object is unknown; hence, the bottom row shows a more realistic case obtained after a "standard number" of iterations (= 64), typical of a clinical scanning scenario. The minimum-RMSE images obtained using the RL algorithm are characterized by a noisy appearance in the lower-count regimes, more so for the standard-iteration images. Comparatively, the proposed method has produced images which have mitigated the noise. In fact, the proposed method has produced images that are very similar in appearance to MLEM+PSF, our reference method for RR. Indeed, the plots of RMSE as a function of iteration number reflect this observation (see Fig. 2). At high counts, all the methods achieve a comparably low minimum RMSE, albeit the RL algorithm converges in the fewest iterations. However, with increasing iterations the RMSE for the RL algorithm increases compared to the other techniques as noise begins to dominate. For the mid- and low-count simulations, the performance gains of the proposed method become clear. For the RL algorithm, there is a rapid deterioration in the minimum RMSE achieved (e.g., at low counts: 124% for RL versus 33% for the proposed method), with the proposed method exhibiting less variation in RMSE with an increasing number of iterations. Fig. 2 also demonstrates a surprising observation: the similarity of the proposed method to MLEM+PSF (the reference method). Indeed, at low counts there may be a slight performance gain using the proposed method over MLEM+PSF. The proposed method does not have access to the measured sinogram of the true object yet is able to achieve comparable performance to MLEM+PSF. These findings were replicated across all phantoms (see our initial findings in the Zubal thorax [8]).
Fig. 3 shows the results of the simulation of the Derenzo phantom at the mid-count level. The true object is shown at the top of the figure for reference. None of the methods were able to resolve the two smallest-diameter rods at any count level. Although RR was achieved for the largest rods with all the applied methods, only for the proposed method and MLEM+PSF is there a noticeable improvement in the 2.3-mm rods (highlighted in red on the true object). This behavior is repeated for the low-count simulations; at high counts the methods performed similarly. A unique feature of the proposed method is the virtual scanner geometry defined by the matrix, S, in (4). Consequently, the performance of the proposed method will depend on hyperparameters related to S and on the initial quality of the reconstructed image.
B. Synthetic Projection Angles
Within the synthetic geometry, we varied the number of projection angles defined within the system matrix S [note this also affects the synthesized data, m_syn, in (4)]. Fig. 4 shows the minimum RMSE achieved using the proposed method when varying the projection angles for the Zubal thorax phantom at a low-count acquisition. Values on the x-axis indicate sampling every 1° until that value, e.g., a value of 100° = between 1° and 100° in 1° increments. The input image to the proposed method was an MLEM reconstruction terminated at 64 iterations, performed with 1° angular sampling between 1° and 180° (indicated by native sampling in Fig. 4).

Fig. 2. Normalized RMSE as a function of the number of iterations using the various methods at different count levels for the Shepp-Logan phantom. The upper row corresponds to the high-count image, the middle row to the mid-count image, and the bottom row to the low-count image. As the count level decreases, the proposed method exhibits a relative performance gain compared to the RL algorithm.
For reference, the minimum RMSE achieved by the comparison methods is shown; these methods have no dependence on the synthetic projection angles. Performance degrades when using fewer projection angles than the supplied MLEM reconstruction. There appears to be no improvement in RMSE beyond the original sampling used in the input image (in this case 180°); this behavior is consistent across different count acquisitions and phantoms.

Fig. 3. RR methods applied to a simulated Derenzo phantom at the mid-count level. The true object is shown at the top of the image; the red triangle indicates the 2.3-mm diameter rods. The first row represents images obtained at the minimum RMSE; the second row represents images at 64 iterations. The 2.3-mm rods are more distinguishable using the proposed method and MLEM+PSF than with the RL algorithm. At the bottom, line profiles (the dashed blue line is indicative of position) show the improvement of the proposed method compared to the RL method on standard-iteration reconstructions.
C. Line Thickness
The virtual geometry was also modified by applying a kernel which varies the thickness of the line integral in the forward projection of the synthetic reconstruction. This was achieved by applying a spatially invariant 2-D Gaussian function to the input image, θ^(K) (from the MLEM algorithm with 64 iterations). The hyperparameter in this case is σ of the Gaussian function. The 2-D Gaussian function was then modeled in the synthetic reconstruction to recover this introduced blur. Fig. 5 shows the minimum RMSE achieved as a function of σ for the Zubal thorax phantom at low counts. The synthetic projection angles were between 1° and 180°, matching the angular sampling used in the MLEM reconstruction supplied as the input image to the proposed method. As a reference, the minimum RMSE achieved with the RL, MLEM, and MLEM+PSF algorithms is shown. For low-count data in the Zubal thorax phantom, there is a slight performance improvement when increasing the line thickness to the level of σ = 2 pixels; increasing the value of σ further results in a deterioration of performance. No performance gain was observed in the mid- and high-count images.
D. Number of Iterations of Supplied MLEM Reconstruction
The performance of the proposed method and the RL algorithm will depend on the input image; another hyperparameter is the number of iterations, K, used in the MLEM reconstruction that supplies the input image.
Fig. 6 shows the minimum RMSE as a function of the number of iterations of the supplied MLEM input image for the Zubal thorax phantom at low counts. For reference, the minimum RMSE achieved using the MLEM and MLEM+PSF reconstructions is shown; clearly these do not depend on an input image, as they operate on the sinogram data. The minimum RMSE for the RL algorithm rapidly increases between 5 and 100 iterations of the MLEM algorithm. In contrast, the proposed method is less dependent on the image quality: even when operating on relatively poor-quality input images, a low minimum RMSE is achieved. This pattern of favorable performance holds for the different count simulations: with the high-count simulation, the increase in minimum RMSE for the RL algorithm is less steep, but the proposed method has a more stable dependence on the input image.
E. Variation on the Proposed Method and Bias/Standard Deviation Tradeoff
Earlier we hypothesized that the data, m_syn, being consistent with the reconstruction problem would, at least partly, confer a benefit over the usual reconstruction problem with inconsistent measured projection data. We therefore applied the synthesized reconstruction method without PSF modeling, equivalent to (5) without the PSF kernel matrix, P, to investigate whether there is a performance gain compared to the MLEM, RL, or MLEM+PSF algorithms.
In Fig. 7, the global bias and standard deviation tradeoff is shown for the proposed method, the variation without PSF modeling (displayed as Proposed_noPSF), and the comparison methods for the Zubal thorax phantom with different simulated count statistics. In the high-count simulations, the RL algorithm achieves a low bias but at increased variance compared to MLEM+PSF and the proposed method. For lower-count simulations, the RL algorithm results in increasing bias and variance. In contrast, the proposed method achieves a low bias and standard deviation across the different count regimes. Fig. 7 demonstrates that the MLEM+PSF algorithm produces the most desirable bias and standard deviation; however, the similarity to the proposed method is clear. The proposed method without PSF modeling exhibits improved performance over MLEM and the RL algorithm despite not modeling the PSF; this is more evident in the low-count simulations.
Besides the global values of bias and standard deviation, we calculated the values for a medium and a small ROI in the Zubal thorax phantom (ROIs defined in Fig. 8). Fig. 9 shows the results for the low-count simulation. In agreement with the global values, in the medium-sized ROI the proposed method achieves a lower bias and standard deviation compared with the RL algorithm; similarly, the proposed method without PSF modeling achieves a performance gain, albeit less marked. Consistent with previous findings, the MLEM+PSF algorithm exhibits the most desirable behavior. Nevertheless, the proposed method is strikingly similar to MLEM+PSF.

Fig. 7. Global bias and standard deviation for the Zubal phantom at different count simulations. The graph at the top corresponds to the high-count simulation; the middle is the mid-count and the bottom is the low-count simulation.
For the small ROI, the relative difference between the RL algorithm and the proposed method is less distinct. The RL algorithm demonstrates a similar bias to the proposed method but still at the cost of increased variance. The proposed method without PSF modeling produces improved values compared to MLEM.
F. Experimental Results
To assess whether the benefits of the proposed method translate beyond simulations, we applied the technique to experimentally acquired preclinical data. Representative maximum intensity projection (MIP) images of 68Ga-THP-PAM in a mouse are shown in Fig. 10. The initial reconstructed images were obtained using a standard preclinical protocol with the native reconstruction software. The RL algorithm and the proposed method are shown at low, mid, and high iterations. Several features are more readily resolved using the proposed method. Although the RL algorithm does recover resolution from the initial reconstruction, it is more severely affected by noise, making certain details more difficult to resolve. Fig. 11 shows the results of the proposed method with different PSF models, a Gaussian and a more realistic monoexponential, using the same mouse data as in Fig. 10. Based on the results in Fig. 10, the number of iterations for the proposed method was chosen as 80. Comparing the initial reconstruction with no resolution modeling to the proposed method using a monoexponential model, it is clear that the resolution has been improved. Importantly, the monoexponential PSF model also yields the same improvement in noise control as the Gaussian PSF model when compared to the RL algorithm.
IV. DISCUSSION
We have proposed a post-reconstruction RR method based on synthesized image reconstruction. To summarize the method: first, sinogram data are synthesized from a real-data reconstruction using a virtual scanner; second, an image is reconstructed using the synthesized sinogram and the virtual scanner's system matrix. Importantly, the synthetic reconstruction incorporates PSF modeling, which allows RR of the original real-data reconstruction. We demonstrated the technique using 2-D PET data by modeling the degradation of spatial resolution due to the positron range. Besides outperforming the widely used RL algorithm in terms of both quantitative and qualitative metrics, our findings show comparable performance to PSF-based reconstruction. Although these promising findings are limited to the recovery of positron-range resolution losses, the flexibility of the method would allow it to be extended to other resolution-degrading effects and even other medical imaging modalities.
In this work, we benchmarked the performance of our method against the RL algorithm, given that it is a well-known post-reconstruction RR method in medical imaging. In high-count simulated PET images, the RL algorithm and the proposed method achieved similar minimum RMSE values and comparable image quality. However, the proposed method delivered a substantial relative performance improvement in the mid- and low-count simulations. The RL algorithm produced images that were noisy and of poorer image quality than the proposed method; indeed, this was reflected in the higher global variance of images produced by the RL algorithm. Simulation of a Derenzo phantom also revealed improvements in spatial resolution using the proposed method. Although ideally one would acquire a high number of counts in every acquisition, in practice the requirement to minimize radiation dose and scanning times means the lower-count simulations may be more realistic in many real clinical and research settings. Moreover, there are several promising radiotracers, particularly for therapeutic applications, that have a low branching ratio for positron decay [20], yielding images with poor count statistics.
Beyond simulations, when using either the RL algorithm or the proposed method for RR, choosing the optimal iteration number is a challenge (since the ground truth is unknown). Yet Fig. 2 reveals a potential additional benefit of the proposed method over the RL algorithm: a slower increase in RMSE as a function of the number of iterations. Hence, over-iterating will have less impact on image quality than for the RL algorithm. Among other factors, post-reconstruction RR will be dependent on the quality of the initial reconstruction (i.e., the input image). Our results suggest that the proposed method is less dependent on the number of iterations of the initial reconstruction (taken as a surrogate of image quality) than the RL algorithm.
Our reference standard for RR was PSF-based reconstruction, specifically the incorporation of the PSF kernel into an MLEM algorithm (MLEM+PSF). Evidently, there are similarities between the proposed method and MLEM+PSF [e.g., compare (2) and (5)]; however, there are subtle yet important distinctions. PSF-based reconstruction can be challenging to implement as it requires access to proprietary information, specifically the forward and back projectors of the scanner. A benefit of the proposed method is that it can be applied without such knowledge, and hence can be far more practical and widely applicable. In addition, the proposed method does not require access to the raw measured sinogram data needed for PSF-based reconstruction. Although RR using PSF-based reconstruction yielded the best performance overall, the proposed method was often comparable; this was apparent both visually and quantitatively. In short, the proposed method has the benefits of a post-reconstruction methodology yet demonstrated performance comparable to a PSF-based reconstruction approach to RR.
A unique aspect of the proposed method is the virtual scanner geometry defined by S. In principle, an investigator is able to choose any synthetic geometry. We explored the impact of the hyperparameters of the synthesized reconstruction. Our findings suggest that for parallel line projectors, there is little additional benefit in altering the angular sampling from that used to acquire the original data. Equally, increasing the thickness of the line integral (defined by σ) yielded only a modest improvement in minimum RMSE, and in most cases increasing σ led to poorer performance. However, it is important to recognize that these values of σ represent extreme levels of image blurring, chosen to test the limits of the method. Additional work is required to identify the optimal geometry and its relative importance compared to the introduced resolution modeling via P.
We also created a synthesized reconstruction problem without PSF modeling and found that the bias and variance of the resulting reconstructed image were improved compared to the original MLEM image. An interesting comparison exists with nested-EM techniques: in previous work it has been shown that interleaving a standard MLEM update with an RL iteration accelerates convergence at the cost of increased variance [21]. In contrast, our proposed method could be considered a de-nesting of the problem, removing resolution modeling and performing it in its own synthesized inverse problem. Indeed, we observed decelerated convergence with performance gains.
The simulated results demonstrate the relative performance gains of the proposed method. However, in the simulations we primarily used the same forward and back-projectors for the synthetic reconstruction as for the original MLEM reconstruction. In practice, it is unlikely we would have access to these projectors for a commercial PET scanner. Nevertheless, the anecdotal real preclinical data illustrate that the technique can be applied to an unknown geometry with improvements over the RL algorithm. In this work we have modeled only contributions due to positron range in the PSF kernel, but there are numerous other factors which contribute to the loss of spatial resolution in PET; additional considerations are needed to extend the method to account for these. Contemporary commercial scanners often include resolution modeling within their reconstruction software to account for some of these effects; however, correction for the positron range is not typically included. Moreover, positron range is an important contributor to resolution loss in preclinical imaging [20], in clinical imaging of high-energy positron emitters [22], [23], and in tissues with low densities such as the lungs [3]. We also demonstrated that the proposed method improves the resolution of real data using a more accurate PSF model (a monoexponential). Further modifications could include using a spatially variant, tissue-specific model, since the positron range distribution depends on the tissue. In this work, we did not include regularization within the synthesized reconstruction; it is possible this could further improve the image quality, particularly the bias and standard deviation tradeoff, of the synthesized reconstruction. Recently, deep neural networks have been used to correct PET images for positron range [7]; the authors demonstrated improved noise characteristics and RR compared to PET images with no RR. Comparison of the proposed method to AI-based methods is outside the scope of this work but could be explored in future studies. We demonstrated the feasibility of the technique in 2-D data only; further work would be required to determine whether these benefits extend to a 3-D implementation of the synthetic projectors.
The proposed method is a versatile technique and has many potential applications outside of PET imaging. As a post-reconstruction technique it circumvents some of the major limitations of PSF-based RR whilst mitigating the build-up of noise common in the RL algorithm. We implemented only one variation of the approach (removing PSF modeling). However, we note several other potential modifications that may offer improvements in domains where the RL algorithm is commonplace; for example, in astronomical imaging, where the non-tomographic data could be synthetically reconstructed with a virtual scanner.
V. CONCLUSION
We proposed a novel post-reconstruction RR method using a synthesized image reconstruction framework. As a post-reconstruction technique, the method is potentially more practical and widely applicable than building resolution modeling into image reconstruction, which is challenging without knowledge of the forward and back-projectors of the PET scanner. In a variety of digital phantoms and in real preclinical PET data, the method outperformed the RL algorithm, a widely used post-reconstruction RR method. The relative performance gains increased with lower count acquisitions. Remarkably, the RMSE and image quality were comparable to PSF-based MLEM reconstruction despite the method having no access to the original measured sinogram data.
Fig. 1. RR methods at different count simulations for the modified Shepp-Logan phantom. The true object is shown at the top. For each count simulation (high, mid, or low), the top row corresponds to images obtained at the minimum RMSE and the bottom row to the images at 64 iterations.
Fig. 4. Minimum RMSE as a function of the number of projection angles in the synthetic geometry for the Zubal thorax phantom at low counts. For reference, the minimum RMSE is shown for the comparison methods.
Fig. 5. Minimum RMSE as a function of line thickness in the synthesized reconstruction for the Zubal thorax phantom at low counts. The comparison methods are shown for reference.
Fig. 6. Minimum RMSE as a function of the number of iterations used in the supplied MLEM reconstruction. Both the proposed method and the RL algorithm vary with input image quality. MLEM and MLEM+PSF are shown for reference.
Fig. 8. ROI definitions for the Zubal phantom. A medium ROI of 60 pixels was delineated around the indicated high-contrast region, and a small ROI of 6 pixels was delineated adjacent to it.
Fig. 9. Bias and standard deviation for low-count simulations in a medium (top) and small (bottom) ROI of the Zubal thorax phantom.
Fig. 10. RR on preclinical images at different iterations. Images are sagittal MIPs. The input image is labeled Initial Recon; the proposed method is compared to the RL algorithm. Details are more clearly resolved (examples indicated by red arrows) using the proposed method.
Fig. 11. RR on preclinical images with alternative positron range modeling. Images are sagittal MIPs. The input image is labeled Initial Recon. The MIP using the proposed method with a Gaussian positron-range model at 80 iterations (as in Fig. 10) is shown in the middle. On the right is the MIP at 80 iterations using the proposed method with a monoexponential positron-range model. RR is clearly achieved using this alternative model.
"Engineering",
"Medicine"
] |
Excitations of attractive 1-D bosons: Binding vs. fermionization
The stationary states of few bosons in a one-dimensional harmonic trap are investigated throughout the crossover from weak to strongly attractive interactions. For sufficient attraction, three different classes of states emerge: (i) N-body bound states, (ii) bound states of smaller fragments, and (iii) gas-like states that fermionize, that is, map to ideal fermions in the limit of infinite attraction. The two-body correlations and momentum spectra characteristic of the three classes are discussed, and the results are illustrated using the soluble two-particle model.
I. INTRODUCTION
In recent years, ultracold atoms have become a flexible tool for the simulation of fundamental quantum systems [1,2,3]. Their versatility derives mainly from the fact that both their external forces and atomic interactions can be designed to a great extent. One striking example is the possibility to confine the atoms' motion to lower dimensions, such as in the one-dimensional (1D) Bose gas [3]. Since, pictorially speaking, particles moving on a line cannot move around each other, they are in a sense more strongly correlated than their higher-dimensional counterparts. Moreover, their effective interaction strength can be tuned freely so as to enter the interesting regime of strong interactions, either via Feshbach resonances of the 3D scattering length [4] or through confinement-induced resonances of the effective 1D coupling [5].
The case of repulsive interactions has long received considerable attention, mostly for the striking feature that, in the hard-core limit of infinite repulsion between the bosons, the system maps to an ideal Fermi gas [6]. In this fermionization limit, the bosons become impenetrable, which has a similar effect as Pauli's exclusion principle for identical fermions. The seminal exact solutions derived for special systems at arbitrary interaction strength-such as the homogeneous Bose gas on a ring in the thermodynamic limit [7,8] as well as for definite particle numbers [9], and for the inhomogeneous gas in a hard-wall trap [10]-have recovered this borderline case in the limit of infinite coupling. The thermodynamic nature of the fermionization crossover from weak to strong repulsion was first explored and contrasted with the complementary Thomas-Fermi regime [11,12], and its microscopic mechanism has been unraveled by recent numerically exact studies [13,14,15,16]. Moreover, its experimental demonstration has sparked renewed interest in 1D bosons [17,18].
By contrast, the understanding of the attractive case is more patchy. In the homogeneous system, the ground state forms an N-body bound state [19]. For sufficient attraction, it becomes ever more localized and is unstable in the thermodynamic limit. For finite systems, though, the ground state remains stable for arbitrary finite attraction, as is demonstrated by the exact solution via Bethe's ansatz for a ring [9] and a hard-wall trap [10]. Much less is known about excited states. In the homogeneous system again, Monte Carlo simulations have indicated the existence of a highly excited gas-like state for ultrastrong attraction, which can be seen as the counterpart of the fermionized ground state for repulsive interactions [20,21]. The evidence for this super-Tonks gas has been supported by a Bethe-ansatz solution [22]. However, an intuitive understanding of these states from a microscopic perspective, and of how they come about in the crossover from weak interactions, is still missing. The complementary crossover for the low-lying excitation spectrum has in turn been investigated recently for the homogeneous system [23]; however, it does not include the gas-like super-Tonks states.
In this article, we study the entire crossover from the noninteracting to the strongly attractive limit for few bosons in a harmonic trap. This is done via the numerically exact multiconfiguration time-dependent Hartree method introduced in Sec. II. Section III presents the general Bose-Fermi map valid for the gas-like super-Tonks states, and illustrates its meaning on the simple model of two bosons. The numerical investigation of the stationary states in Sec. IV reveals three distinct classes for strong enough attraction: N -body bound states (Sec. IV A), states involving smaller fragments (Sec. IV B), and finally gas-like states that fermionize (Sec. IV C).
II. MODEL AND COMPUTATIONAL METHOD
Model We consider N trapped bosons described by the Hamiltonian

$$H = \sum_{i=1}^{N}\Big[{-\tfrac12}\,\partial_{x_i}^{2} + U(x_i)\Big] + g\sum_{i<j}\delta_\sigma(x_i - x_j).$$

We will focus on the case of harmonic confinement, $U(x) = \frac12 x^2$ (harmonic-oscillator units are employed throughout). The effective interaction resembles a 1D contact potential, but is mollified with a Gaussian $\delta_\sigma(x) \equiv e^{-x^2/2\sigma^2}/\sqrt{2\pi}\sigma$ (of width σ = 0.05) for numerical reasons (cf. [14] for details). We concentrate on attractive forces g ∈ (−∞, 0], which can be achieved experimentally either by having negative scattering lengths or by reducing the transverse confinement length $a_\perp \equiv \sqrt{\hbar/M\omega_\perp}$ sufficiently [5]. Computational method Our approach relies on the numerically exact multi-configuration time-dependent Hartree method [24,25,26], a quantum-dynamics approach which has been applied successfully to systems of few identical bosons [14,15,27,28,29] as well as to Bose-Bose mixtures [30]. Its principal idea is to solve the time-dependent Schrödinger equation $i\dot{\Psi}(t) = H\Psi(t)$ as an initial-value problem by expanding the solution in terms of direct (or Hartree) products $\Phi_J \equiv \varphi_{j_1} \otimes \cdots \otimes \varphi_{j_N}$:

$$\Psi(t) = \sum_J A_J(t)\,\Phi_J(t). \qquad (1)$$

The unknown single-particle functions $\varphi_j$ (j = 1, …, n) are in turn represented in a fixed basis of, in our case, harmonic-oscillator orbitals. The permutation symmetry of Ψ is ensured by the correct symmetrization of the expansion coefficients $A_J$.
Note that, in the above expansion, not only the coefficients $A_J$ but also the single-particle functions $\varphi_j$ are time dependent. Using the Dirac-Frenkel variational principle, one can derive equations of motion for both $A_J$ and $\varphi_j$ [25]. Integrating this differential-equation system allows us to obtain the time evolution of the system via (1). This has the advantage that the basis set $\{\Phi_J(t)\}$ is variationally optimal at each time t; thus it can be kept relatively small.
Although designed for time-dependent studies, it is also possible to apply this approach to stationary states. This is done via the so-called relaxation method [31]. The key idea is to propagate some wave function Ψ(0) by the non-unitary $e^{-H\tau}$ (propagation in imaginary time). As τ → ∞, this exponentially damps out any contribution but that stemming from the true ground state, like $e^{-(E_m - E_0)\tau}$. In practice, one relies on a more sophisticated scheme termed improved relaxation [32,33], which is much more robust, especially for excitations. Here $\langle\Psi|H|\Psi\rangle$ is minimized with respect to both the coefficients $A_J$ and the orbitals $\varphi_j$. The effective eigenvalue problems thus obtained are then solved iteratively by first solving for $A_J$ with fixed orbitals and then 'optimizing' $\varphi_j$ by propagating them in imaginary time over a short period. That cycle is then repeated.
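To illustrate the relaxation idea itself (not the MCTDH improved relaxation), here is a minimal grid-based sketch for a single particle in the harmonic trap; the grid size, time step, and trial state are illustrative assumptions:

```python
# Minimal sketch of imaginary-time relaxation (illustrative; the paper
# uses MCTDH improved relaxation). Split-step propagation by exp(-H dtau)
# damps out excited-state contributions like exp(-(E_m - E_0) tau).
import numpy as np

n, L, dtau = 512, 20.0, 1e-3
x = np.linspace(-L / 2, L / 2, n, endpoint=False)
k = 2 * np.pi * np.fft.fftfreq(n, d=x[1] - x[0])
V = 0.5 * x**2                       # harmonic confinement U(x)
psi = np.exp(-((x - 1.0) ** 2))      # arbitrary trial state

for _ in range(20000):
    # Half potential step, full kinetic step (in k-space), half potential step.
    psi *= np.exp(-0.5 * V * dtau)
    psi = np.fft.ifft(np.exp(-0.5 * k**2 * dtau) * np.fft.fft(psi))
    psi *= np.exp(-0.5 * V * dtau)
    psi /= np.sqrt(np.trapz(np.abs(psi) ** 2, x))   # renormalize

# Energy <psi|H|psi>; should approach the ground-state value 1/2 (HO units).
kin = np.real(np.trapz(np.conj(psi) * np.fft.ifft(0.5 * k**2 * np.fft.fft(psi)), x))
print(kin + np.trapz(V * np.abs(psi) ** 2, x))
```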
III. BOSE-FERMI MAP FOR ATTRACTIVE BOSONS
In this section, we state the general Bose-Fermi map and discuss its application to infinitely attractive interactions. (Without loss of generality, we focus on the time-independent formulation.) Its intuitive meaning will be illustrated on the special example of two harmonically trapped bosons.
A. General map
The Schrödinger equation of N bosons with point interactions, (E − H)Ψ = 0, is equivalent to that of a noninteracting system supplemented by the boundary (cusp) condition

$$\partial_r\Psi\big|_{r=0^+} = \tfrac{g}{2}\,\Psi\big|_{r=0}, \qquad (2)$$

where r ≡ x_i − x_j for fixed i ≠ j. For infinitely repulsive interactions, g → ∞, the constraint (2) leads to the well-known hard-core boundary conditions

$$\Psi\big|_{x_i = x_j} = 0. \qquad (3)$$

Since, otherwise, Ψ fulfills the noninteracting Schrödinger equation, it is intelligible that it can be mapped to a noninteracting state $\Psi^-$ of identical fermions, which automatically satisfies the hard-core boundary condition (3) by Pauli's exclusion principle [6]:

$$\Psi = A\Psi^-, \qquad A \equiv \prod_{i<j}\mathrm{sgn}(x_i - x_j). \qquad (4)$$

The Bose-Fermi map A serves only to restore bosonic permutation symmetry. Note that, since $A^2 = 1$, all local quantities derived from $\rho_N = |\Psi|^2$ will coincide with those computed from the fermion state. In this sense, the case of infinite repulsion is commonly referred to as the fermionization limit ("Tonks gas").
By contrast, the constraint for infinite attraction reads $\Psi|_{r=0} = \frac{2}{g}\,\partial_r\Psi|_{r=0^+} \to 0$ (for bounded derivative), so we recover the hard-core gas above. Consequently, the Bose-Fermi mapping $\Psi = A\Psi^-$ then holds for g → −∞ as well. In particular, the energetically lowest such state exactly equals the fermionized repulsive ground state, the Tonks gas. However, for strong attraction this eigenstate will be highly excited, whereas the ground state will be strongly bound. For finite g > −∞, this may be identified with the super-Tonks state in Ref. [20]. A few comments are in order: (i) By construction, this holds for any external potential U, just like the standard Bose-Fermi map. In particular, the analytic solution for N bosons in a harmonic trap carries right over [34],

$$\Psi(X) \propto \prod_{i<j}|x_i - x_j|\; e^{-|X|^2/2},$$

and likewise for the homogeneous system [6].
(ii) By the same logic as above, this extends to binary mixtures of bosons (for the repulsive case, cf. [35]). Likewise, the generalized Bose-Fermi map for spinor bosons [36] ought to apply also to the limit of infinite attraction.
(iii) The map also holds in the presence of additional longrange interactions, such as in dipolar gases [37].
B. Illustration
To visualize the above argument, let us resort to the simple model of two bosons in a harmonic-oscillator (HO) potential. Here the center of mass (CM), $R = \frac{1}{N}\sum_i x_i$, and the relative coordinate, $r = x_1 - x_2$, separate, $H = h_{\rm CM}(R) + H_{\rm rel}(r)$. One can therefore write the wave function and its energy as

$$\Psi(x_1, x_2) = \phi_{\mathcal N}(R)\,\psi(r), \qquad E = \big(\mathcal{N} + \tfrac12\big) + \epsilon, \qquad (5)$$

where $\phi_{\mathcal N}$ is the HO orbital with quantum number $\mathcal{N} = 0, 1, \ldots$ and $\epsilon$ is the relative-motion energy. The relative Hamiltonian may be viewed as a harmonic potential split into halves in the center, i.e., at the point of collision r = 0. There the delta function imposes the boundary condition (2), which amounts to a cusp for g < 0.
In the limit g → −∞, when the support of ψ becomes much smaller than the oscillator length, the relative motion is described simply by the bound state of the delta potential,

$$\psi(r) \sim \tfrac{1}{\sqrt{a}}\, e^{-|r|/a}, \qquad (6)$$

with a ≡ −2/g the 1D scattering length. Clearly $|\psi(r)|^2 \to \delta(r)$ as g → −∞.
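As a quick numerical check of this limit (our own illustration, under stated assumptions): diagonalizing the relative Hamiltonian $H_{\rm rel} = -\partial_r^2 + r^2/4 + g\,\delta_\sigma(r)$ on a grid, with the same Gaussian mollifier as in Sec. II, should reproduce the free-space binding energy $-g^2/4$ for strong attraction; the grid parameters below are illustrative.

```python
# Numerical check (illustrative): relative Hamiltonian of two trapped
# bosons, H_rel = -d^2/dr^2 + r^2/4 + g*delta_sigma(r). For strong
# attraction, E_0 -> -g^2/4 and psi(r) ~ exp(-|r|/a) with a = -2/g.
import numpy as np

n, L, sigma, g = 2000, 20.0, 0.05, -8.0
r = np.linspace(-L / 2, L / 2, n)
h = r[1] - r[0]

# Finite-difference kinetic term -d^2/dr^2 (reduced-mass form: no 1/2).
T = (-np.eye(n, k=1) - np.eye(n, k=-1) + 2 * np.eye(n)) / h**2
# Split harmonic well plus Gaussian-mollified attractive delta.
V = np.diag(r**2 / 4
            + g * np.exp(-r**2 / (2 * sigma**2)) / (np.sqrt(2 * np.pi) * sigma))

E, psi = np.linalg.eigh(T + V)
print(E[0], -g**2 / 4)   # ground-state energy vs. free-space binding energy
```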
IV. MOLECULE FORMATION VS. FERMIONIZATION IN A HARMONIC TRAP
After having presented the general mapping valid for infinite attraction, let us now investigate the crossover to that borderline case, starting from the noninteracting states. For concreteness, we shall focus on the case of N = 3 bosons in a harmonic trap. Figure 2 explores the evolution of the low-lying few-body spectrum $\{E_m(g)\}$ as we vary g ≤ 0. In the absence of interactions, the spectrum exhibits an even spacing, which comes about by distributing all N particles in number states $|\mathbf{N}\rangle = |N_0, N_1, \ldots\rangle$ over the lowest HO levels with energies $\epsilon_a = a + \tfrac12$.
Clearly, Fig. 2 reveals that, when attractive interactions g < 0 are switched on, the depicted spectrum falls apart into three (for N = 3) qualitatively different subclasses. In anticipation of our analysis below, we identify the asymptotically lowest set of levels as N-body bound states (or trimers), the cluster on top of these as hybrid states (dimers plus one free boson), and the highest level as a gas-like state which undergoes fermionization. Note that, by CM separation, for each state there is a countable set of copies with different CM energies $\mathcal{N} + \tfrac12$, which are shifted with respect to one another.
A. Trimer states
The ground state, as well as its copies with higher CM excitations, can be thought of as an N-body bound state, the straightforward generalization of the two-body bound state discussed in Sec. III B. In fact, this class of states is well known from the homogeneous system [19], where an analytic solution is available via Bethe's ansatz [9,23],

$$\Psi(X) \propto \exp\Big({-\tfrac{1}{a}}\sum_{i<j}|x_i - x_j|\Big), \qquad (7)$$

where a ≡ −2/g is the 1D scattering length. We will now argue that, for sufficient attraction, this wave function also holds in our case of harmonic confinement, up to a trivial CM factor $\phi_{\mathcal N}$. To this end, let us proceed as in [19] and transform $X \equiv (x_1, \ldots, x_N)^\top$ to Jacobian coordinates $Y \equiv (R, r_1, \ldots, r_{N-1})^\top = OX$, here for simplicity specified for N = 3:

$$O = \begin{pmatrix} \tfrac{1}{\sqrt{3}} & \tfrac{1}{\sqrt{3}} & \tfrac{1}{\sqrt{3}} \\ \tfrac{1}{\sqrt{2}} & -\tfrac{1}{\sqrt{2}} & 0 \\ \tfrac{1}{\sqrt{6}} & \tfrac{1}{\sqrt{6}} & -\tfrac{2}{\sqrt{6}} \end{pmatrix}.$$

Up to a factor, R ($r_1$) coincides with the usual center of mass (two-particle relative coordinate), while $r_2$ gives the difference between particle #3 and the center of mass of the cluster (1, 2). By orthogonality of O, the Hamiltonian transforms to $H(Y) = h_{\rm CM} + H_{\rm rel}$, with

$$h_{\rm CM} = -\tfrac12\partial_R^2 + \tfrac12 R^2, \qquad H_{\rm rel} = \sum_{k=1}^{N-1}\Big({-\tfrac12}\partial_{r_k}^2 + \tfrac12 r_k^2\Big) + g\sum_{i<j}\delta_\sigma(x_i - x_j). \qquad (8)$$

If all N particles cling together to form a tightly bound state, then their distances will be small compared to the confinement scale, $|r_k| \ll 1$. In this limit, $\frac12 r_k^2$ may be safely neglected, so that the relative wave function asymptotically maps to the homogeneous form (7). Likewise, the energy scales as $E(g) \sim \mathcal{N} + \frac{N}{2} - \alpha_N g^2$ ($\alpha_N > 0$). A look at the one-body density profiles $\rho(x) = \langle x|\rho_1|x\rangle$ shown in Fig. 3(a) confirms that this state becomes more and more localized. In contrast to the translation-invariant homogeneous case, this "soliton"-like state is localized even in the one-body density, which represents an average over all measurements. Moreover, in contrast to the hard-wall trap [10], there is no tipping point where the width $(\Delta x)^2 = \langle x^2\rangle - \langle x\rangle^2$ would become larger again and return to its noninteracting value for g → −∞. In fact, here Δx decreases monotonically for g → −∞ and eventually saturates at the width of the CM density, whose length scale $a_{\rm CM} = 1/\sqrt{N}$ is suppressed by the total mass NM (in units of the atomic mass M ≡ 1). To prove this, note that, by CM separation, the one-body density is the convolution $\rho = \rho_{\rm CM} * \rho_{\rm rel}$, where $\rho_{\rm rel}$ denotes the distribution of a particle's position relative to the CM; in the limit of tight binding this yields

$$\rho(x) \xrightarrow{\;g\to-\infty\;} \rho_{\rm CM}(x). \qquad (9)$$

For increasing attraction, $\lim_{g\to-\infty}\rho_{\rm rel}(r) = \delta(r)$, in agreement with (7). Fig. 4 visualizes that the two-body density $\rho_2(x_1, x_2) = \langle x_1, x_2|\rho_2|x_1, x_2\rangle$ is more and more concentrated on the diagonal $\{x_1 = x_2\}$. Carrying out the trivial integrals proves Eq. (9), which reflects that all atoms clump together to a point-like molecule whose position coincides with the CM. (Of course, the validity of our effective model requires the molecule to be still large compared to the spatial extension of the atoms.) That line of reasoning fails in the hard-wall trap, where R and r couple strongly due to the anharmonicity of U(x) [40]; in fact, for very tight binding of two particles, their common CM would be permitted to spread out over the whole box, thus compensating the stronger localization in the relative coordinate. A simple dimensional argument to see this is that the length scale in a hard-wall trap is simply the size of the box, independent of the object's mass, in contrast to the harmonic oscillator.
B. Hybrid states
The behavior of the class of levels on top of the trimer levels (N = 3, Fig. 2) is clearly more complex, which indicates that different CM excitations $\mathcal{N}$ are involved. On the one hand, the levels are significantly above those of the N-body bound states; on the other hand, they do not saturate with increasing attraction. This suggests identifying these three-boson states with the formation of dimers plus one unbound atom. For general N, this class of hybrid states involves different fragments, labeled f = 1, …, F, of $N_f$-body bound states, where $\sum_{f=1}^{F} N_f = N$ and F = 2, …, N − 1. Figure 3(b) shows the density profile of the lowest hybrid state, which has $\mathcal{N} = 0$. A pronounced peak at x = 0 builds up similar to the trimer case, but with a non-Gaussian structure indicative of relative excitations. This becomes clearer from a look at the two-body density in Fig. 4: for increasing attraction, part of the bosons clump together, as is visible from the "molecule" peak at $\{x_1 = x_2\}$. However, unlike before, a non-negligible part remains isolated at $\{x_1 = -x_2\}$. To understand this pattern a little better, let us revisit the Hamiltonian (8). If we assume that only two of the three bosons bind, say $|r_1| \ll 1$ (up to permutation symmetry $\mathcal{S}_+$), then we end up with two decoupled Hamiltonians, one for a free-space molecule ($r_1$, relating to the relative ground state described in Fig. 1a) and one for an effective particle in an excited state of an oscillator with a delta-type dimple at the origin ($r_2$, the lowest excitation corresponding to Fig. 1b). This yields the relative wave function (excluding the trivial CM factor)

$$\psi(X) \propto \mathcal{S}_+\, e^{-|x_1 - x_2|/a}\, U_{-\epsilon}(r_2),$$

where the parameters are determined in analogy to Eq. (5).
We have checked that this expression qualitatively reproduces the two-body pattern in Fig. 4 for g → −∞. That makes it tempting to think of the hybrid state as a hard-core "gas" of a dimer, clumped near the trap center, and a third, unbound boson. In that regime, the energy scales as $E(g) \sim \mathcal{N} + \mathrm{const} - g^2/4$, dominated by the dimer's binding energy. In general, there can be different combinations of relative and CM excitations ($\mathcal{N} \neq 0$) which give nearly the same energy; this explains the splittings of all but the lowest hybrid levels in Fig. 2.
C. Fermionizing states
Let us now focus on the highest level in Fig. 2, which is the energetically lowest gas-like state. Its energy does not diverge quadratically as g → −∞, but rather saturates. By the fermionization map above, its limit is simply the energy of N free fermions, $\lim_{g\to-\infty} E(g) = \sum_{a=0}^{N-1}\epsilon_a = \frac{N^2}{2}$ (see Fig. 2, inset), and likewise for higher excitations. Evidently, this requires a huge energy for the connecting level E(g = 0) > E(−∞) if N ≫ 1. In fact, the difference between the two equals that between the noninteracting ground state $N\epsilon_0$ and its fermionization limit $\sum_{a=0}^{N-1}\epsilon_a$, which can be written down explicitly in a harmonic trap:

$$\sum_{a=0}^{N-1}\epsilon_a - N\epsilon_0 = \frac{N(N-1)}{2} = \binom{N}{2}.$$

This can be thought of as increasing (lowering) the energy by Δε = 1 for each pair (i < j). Therefore, the "super-Tonks" level connects to the noninteracting level $E(0) = \frac{N^2}{2} + \binom{N}{2} = \frac{N(2N-1)}{2}$, as may be verified for N ≤ 4 in Fig. 2 (inset).
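As a quick worked check of this level counting (our own arithmetic, using only the formulas above), for the N = 3 case studied here:

```latex
% Worked check for N = 3 (harmonic-oscillator units):
\begin{align*}
  E(-\infty) &= \sum_{a=0}^{2}\epsilon_a
      = \tfrac12 + \tfrac32 + \tfrac52 = \tfrac92 = \tfrac{N^2}{2},\\
  E(0) - E(-\infty) &= \sum_{a=0}^{2}\epsilon_a - 3\epsilon_0
      = \tfrac92 - \tfrac32 = 3 = \binom{3}{2}\,\Delta\epsilon,\\
  \Rightarrow\quad E(0) &= \tfrac92 + 3 = \tfrac{15}{2} = \tfrac{N(2N-1)}{2}.
\end{align*}
```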
Accordingly, the corresponding many-body state is expected to evolve to (4). A glimpse of this can be gotten from the density profile shown in Fig. 3(c), where the N density wiggles characteristic of the Tonks gas emerge, $\rho = \sum_{a=0}^{N-1}|\phi_a|^2$ (a similar observation has been stated in Ref. [21]). In the repulsive case, this has the familiar interpretation that the N bosons localize on N more or less "discrete" spots due to a trade-off between mutual isolation and external confinement [13,14]. Some more insight into this crossover is given by the two-body correlation function $\rho_2(x_1, x_2)$ (Fig. 4).
As expected from the two-atom toy model (Sec. III B), the diagonal $\{x_1 = x_2\}$ "damps out" more and more. The fact that it persists even for couplings as large as g = −10 underscores the notion of the (finite-g) super-Tonks gas being more strongly correlated than its repulsive counterpart [20]. This in turn relates to the picture that, due to a positive 1D scattering length a = −2/g, a small region is excluded from the scattering zone, so that the hard core effectively extends to a nonzero volume. This also offers an explanation for another phenomenon: as g → −∞, we see that the typical fermionized checkerboard pattern forms in the two-body density [15,34]. This signifies that, upon measuring a first boson at, say, $x_1 \approx 0$, the remaining N − 1 = 2 bosons are pinpointed to discrete positions $x_2 \approx \pm 1.5$. However, here the peaks are much more pronounced than in the Tonks gas, which may be accounted for by a "thicker" hard core between the atoms.
We have so far looked into local observables, where the fermionization limit imprints truly fermionic properties on the bosons. By contrast, nonlocal features as evidenced, e.g., in the experimentally relevant momentum distribution

$$\tilde{\rho}(k) = \frac{1}{2\pi}\int dx\, dx'\; e^{-ik(x - x')}\,\rho_1(x, x'),$$

reflect nontrivial differences from the ideal Fermi gas. For the ground state, where all bosons simply form an ever tighter N-body molecule as g → −∞, the one-body density matrix $\rho_1(x, x') \propto \delta(x - x')$ loses all long-range order, i.e., $\rho_1(x, x') = 0$ for |x − x′| > 0. By complementarity, its momentum spectrum trivially approaches a flat shape. Things are more complicated for the hybrid state in Fig. 5(b): since only two bosons bind and, as a whole, form a hard-core composite with the remaining atom, some long-range order is preserved, so that the central peak $\tilde{\rho}(0)$ persists even for large values of |g|. Note that, as in the repulsive case, the hard-core short-range correlations enforce an algebraic decay for high momenta, $\tilde{\rho}(k) \sim c/k^4$ [41]. Finally, the gas-like state exhibits the most interesting behavior (Fig. 5c). The initially box-like distribution $\tilde{\rho}(k)$ (harmonic trap at g = 0) forms a strong k = 0 peak highly reminiscent of the Tonks gas [13,15]. Also note the slow formation of the characteristic $k^{-4}$ tails. This complements our picture of the crossover from zero to strongly attractive interactions.
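For completeness, a minimal numpy sketch of this diagnostic (our own illustration): given a one-body density matrix sampled on a uniform grid, the double Fourier transform below is the discrete analogue of the definition above.

```python
# Minimal sketch: momentum distribution from a sampled one-body density
# matrix rho1[i, j] ~ rho_1(x_i, x_j) on a uniform grid (illustrative).
import numpy as np

def momentum_distribution(rho1, dx):
    """Discrete analogue of (1/2pi) * iint dx dx' e^{-ik(x-x')} rho_1(x,x').
    Integrating the result over k recovers the particle number N."""
    n = rho1.shape[0]
    k = 2 * np.pi * np.fft.fftshift(np.fft.fftfreq(n, d=dx))
    # FFT over x applies e^{-ikx}; inverse FFT over x' applies e^{+ikx'}.
    diag = np.diag(np.fft.ifft(np.fft.fft(rho1, axis=0), axis=1))
    rho_k = np.fft.fftshift(diag) * dx * dx * n / (2 * np.pi)
    return k, np.real(rho_k)

# For a pure state psi: rho1 = N * np.outer(psi, psi.conj()).
```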
V. SUMMARY
In this work, we have brought together the subjects of attractive, one-dimensional Bose gases-which currently are of great interest and experimentally relevant-and the binding properties of few-body systems. We have studied the stationary states of one-dimensional bosons in a harmonic trap throughout the crossover from weakly to strongly attractive interactions.
Three different classes of states have emerged for strong enough attraction: (i) The ground state and its center-of-mass excitations become N-body bound states, for which any two particles pair up to a tightly bound molecule. Its binding length a shrinks to zero with increasing attraction, and thus the relative motion becomes independent of the trap geometry. (ii) By contrast, certain highly excited states fermionize, i.e., they map to an ideal Fermi state for infinite attraction. Both the typical fermionic density profile with N maxima and, more generally, the characteristic checkerboard pattern in the two-body density have been evidenced, signifying localization of the individual atoms. Also, the formation of a hard-core momentum distribution has been witnessed, with a zero-momentum peak and an algebraic decay for large momenta. (iii) Between these two extremes, there is a rich class of hybrid states featuring mixed molecule and hard-core boundary conditions. For the special case of N = 3 atoms, this class consists of a dimer plus a single boson, with a hard-core separation between the two.
Even though we have focused mostly on few atoms (N = 3) in a harmonic trap, these results reflect the microscopic mechanism for arbitrary atom numbers and external potentials. | 5,415.2 | 2008-06-05T00:00:00.000 | [
"Physics"
] |
Identification of PKDL, a Novel Polycystic Kidney Disease 2-Like Gene Whose Murine Homologue Is Deleted in Mice with Kidney and Retinal Defects*
Polycystin-1 and polycystin-2 are the products of PKD1 and PKD2, genes that are mutated in most cases of autosomal dominant polycystic kidney disease. Polycystin-2 shares ∼46% homology with pore-forming domains of a number of cation channels. It has been suggested that polycystin-2 may function as a subunit of an ion channel whose activity is regulated by polycystin-1. Here we report the identification of a human gene, PKDL, which encodes a new member of the polycystin protein family designated polycystin-L. Polycystin-L has 50% amino acid sequence identity and 71% homology to polycystin-2 and has striking sequence and structural resemblance to the pore-forming α1 subunits of Ca2+ channels, suggesting that polycystin-L may function as a subunit of an ion channel. The full-length transcript of PKDL is expressed at high levels in fetal tissues, including kidney and liver, and down-regulated in adult tissues. PKDL was assigned to 10q24 by fluorescence in situ hybridization and is linked to D10S603 by radiation hybrid mapping. There is no evidence of linkage to PKDL in six ADPKD families that are unlinked to PKD1 or PKD2. The mouse homologue of PKDL is deleted in Krd mice, a deletion mutant with defects in the kidney and eye. We propose that PKDL is an excellent candidate for as yet unmapped cystic diseases in man and animals.
Polycystin-1 and polycystin-2 are the respective gene products of PKD1 and PKD2, mutations in which account for ∼95% of cases of ADPKD. 1 ADPKD affects up to 1/1,000 individuals and is associated with a 50% incidence of end-stage renal failure by the sixth decade of life (1). At least one additional gene is known to be mutated in the ADPKD population (2,3) but has yet to be identified.
Polycystin-1 encodes a 4,303-amino acid plasma membrane protein with a large extracellular N-terminal domain that contains leucine-rich repeats, a C-type lectin domain, and an LDL-A-like domain, all three of which are involved in cell-cell or cell-matrix interactions in other proteins (4 -6). These domains are followed by 16 repeats of the so-called PKD domain and by an REJ (receptor for egg jelly in sea urchin sperm)-like domain.
The predicted amino acid sequence of the PKD2 gene is homologous to the C terminus of polycystin-1 (9,10). Polycystin-2 is a 968-amino acid protein with ∼46% sequence similarity to each domain of the pore-forming α1 subunits of Ca2+ and other cation channels, and like these channel subunits, it is predicted to have six transmembrane domains. Polycystin-2 has a putative Ca2+ binding structure (EF-hand) in its C-terminal cytoplasmic domain. It interacts biochemically with polycystin-1 and with itself (7,8).
Here we report the identification, chromosomal localization, and expression of a third gene encoding a protein of the polycystin family. The product of this gene is an excellent candidate for a component of the pore-forming subunit of a polycystin-related channel and is also a candidate for various human and murine cystic diseases.
EXPERIMENTAL PROCEDURES
Isolation of PKDL cDNAs-Overlapping EST sequences W27963 and W28231, derived from a retina cDNA library, were identified by their gene product homology to polycystin-2 (gb U50928). A 340-base pair fragment in the overlap region of both ESTs was amplified from human adult kidney and brain poly(A)-selected RNA by reverse transcription-PCR using primers 5′-TCTTCGTGCTCCTGAACATG-3′ and 5′-CCTGTCGCATTTTTCCTGTT-3′. 5′- and 3′-rapid amplification of cDNA ends were performed with human skeletal muscle and kidney rapid amplification of cDNA ends kits (CLONTECH, Palo Alto, CA), respectively. Primers were designed based on PKDL reverse transcription-PCR products. Nested amplification was performed following the manufacturer's instructions. The 5′-rapid amplification of cDNA ends product was random-labeled with [32P]dCTP and used to screen a human retina cDNA library (CLONTECH). Hybridization was performed in a buffer containing 5× SSC (1× SSC: 0.15 M NaCl and 0.015 M sodium citrate), 50% formamide, 1% SDS, and 5× Denhardt's solution at 42°C overnight. Filters were washed three times in buffer (0.1× SSC and 0.1% SDS) at 65°C. Positive signals were purified, and inserts were subcloned into pBluescript II (Stratagene, La Jolla, CA) and sequenced.
Sequence Analysis-Clones were sequenced from both strands, and the sequences were aligned to give an overall consensus sequence. The computer program MOTIFS of GCG (11) was used to identify putative glycosylation and phosphorylation sites (20). Prediction of coiled-coil structure by Lupas' algorithm (12) was performed with the COILS computer program, with and without 2.5-fold weighting of positions a and d, whereas prediction by Berger's algorithm (13) was performed with the Paircoil program. Kyte and Doolittle's hydropathy analysis (14) was performed using PEPPLOT (GCG). Secondary structure analysis was performed with the PEPTIDESTRUCTURE (GCG) computer program and with protein sequence analysis software using type-2 discrete state-space models (15,16). Analysis of transmembrane segments was performed with the TMpred computer program (17) using windows of 18-33 residues and with SOSUI and TMAP (19). Alignments with homologous sequences from polycystin-1, polycystin-2, and Ca2+ channel α1 subunits were performed with LINEUP and BESTFIT (GCG) and optimized by visual comparison.
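To illustrate the kind of analysis PEPPLOT performs, here is a minimal sliding-window Kyte-Doolittle hydropathy sketch in Python (our own illustration; the window size and threshold heuristic are illustrative, while the KD values are the published scale):

```python
# Minimal sliding-window Kyte-Doolittle hydropathy profile (illustrative;
# the paper used PEPPLOT/GCG). With a 19-residue window, sustained peaks
# above ~1.6 are the classic heuristic for putative transmembrane segments.
KD = {  # Kyte & Doolittle (1982) hydropathy values
    "A": 1.8, "R": -4.5, "N": -3.5, "D": -3.5, "C": 2.5,
    "Q": -3.5, "E": -3.5, "G": -0.4, "H": -3.2, "I": 4.5,
    "L": 3.8, "K": -3.9, "M": 1.9, "F": 2.8, "P": -1.6,
    "S": -0.8, "T": -0.7, "W": -0.9, "Y": -1.3, "V": 4.2,
}

def hydropathy(seq, window=19):
    """Mean KD score over a sliding window centered on each residue."""
    scores = [KD.get(aa, 0.0) for aa in seq.upper()]
    half = window // 2
    return [
        sum(scores[i - half : i + half + 1]) / window
        for i in range(half, len(scores) - half)
    ]
```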
RNA Dot Blot and Northern Hybridization-Human adult and fetal RNA blots (CLONTECH) were hybridized with a randomly labeled probe spanning the 5′-most 1.5 kb of the coding sequence of PKDL in 5× SSC, 50% formamide, 1% SDS, 5× Denhardt's solution at 42°C overnight. Filters were washed twice in 2× SSC, 0.1% SDS at room temperature and at 50°C. Signals were visualized by autoradiography.
Fluorescent in Situ Hybridization-A 1.7-kb human PKDL genomic fragment between cDNA positions 2,006 and 2,206 was PCR-amplified with exonic primers and subcloned into pCRII (Invitrogen, Carlsbad, CA). One μg of this vector was labeled with digoxigenin-11-dUTP as described previously (20), coprecipitated with 10 μg of Cot-1 DNA, and resuspended in 1× Tris-EDTA at 200 μg/ml. Hybridization of metaphase chromosome preparations from peripheral blood lymphocytes of normal human males was performed with PKDL at 10 μg/ml in Hybrisol VI as described previously (21). Digoxigenin-labeled probe was detected using reagents supplied in the Oncor Kit (Oncor, Gaithersburg, MD) according to the manufacturer's recommendations. Metaphase chromosomes were counterstained with 4,6-diamidino-2-phenylindole dihydrochloride (DAPI). The map position of PKDL was determined by visual inspection of the fluorescent signal on the DAPI-stained metaphase chromosomes using a Zeiss Axiophot microscope. Images were captured and printed using the CytoVision Imaging System (Applied Imaging, Pittsburgh, PA). Twenty-one metaphases were assessed for probe localization.
Radiation Hybrid Mapping-An intron between cDNA positions 2,042 and 2,043 was amplified with exonic primers and sequenced. A set of primers was designed to amplify part of this intron. The Stanford G3 panel (22) was screened by PCR with this primer set. Data were processed at the Stanford Human Genome Center RH server.
Linkage Analysis-Two families, one Italian and one Spanish (F431 and F432), have been previously described (3,23,24). Four other families were studied: TOR1, a three-generation pedigree with 42 members, of whom 26 were affected; TOR2, a two-generation pedigree with 8 members, of whom 4 are affected 3; Singa 1, a two-generation pedigree with 5 members, of whom 4 are affected; and Bulga 1, a three-generation pedigree with 9 members, of whom 5 are affected (24).
Généthon polymorphic marker D10S603, which has the same distribution as PKDL by radiation hybrid mapping, and two flanking markers (D10S198 - 1.2 cM - D10S603 - 0.2 cM - D10S192) were selected to test for linkage to ADPKD in six families previously shown to be unlinked to the PKD1 and PKD2 loci. Genomic DNA from members of these families was used as template for PCR. [32P]dCTP-labeled PCR products were separated by polyacrylamide gel electrophoresis. Pairwise affected-only linkage analysis was performed using the FASTLINK suite of programs. A fully penetrant dominant model with a disease gene frequency of 0.0001 and equal allele frequencies was assumed. Two-point lod scores were calculated.
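To make the two-point lod-score computation concrete, here is a minimal sketch for phase-known meioses under a fully penetrant dominant model (our own illustration; the recombinant/nonrecombinant counts below are placeholders, not data from these families):

```python
# Minimal two-point lod score for phase-known meioses (illustrative).
# LOD(theta) = log10( L(theta) / L(theta = 0.5) ), with
# L(theta) = (1 - theta)^NR * theta^R for NR nonrecombinants, R recombinants.
import math

def lod(theta, nonrec, rec):
    l_theta = (1 - theta) ** nonrec * theta ** rec
    l_null = 0.5 ** (nonrec + rec)
    return math.log10(l_theta / l_null)

# Placeholder counts: 10 informative meioses, 1 recombinant.
for theta in (0.01, 0.05, 0.1, 0.2, 0.3, 0.4):
    print(f"theta={theta:.2f}  lod={lod(theta, nonrec=9, rec=1):+.2f}")
```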
RESULTS
Through data base searches we identified two EST sequences of ∼500 nucleotides, W27963 and W28231, with similarity to polycystin-2. The deduced amino acid sequences of W27963 and W28231 showed 78% homology and 56% identity (over residues 649 to 749) and 65% homology and 39% identity (over residues 678 to 786, with a single three-residue gap) to polycystin-2, respectively. The two EST sequences shared 94% identity over 421 base pairs. We tentatively concluded that these ESTs arose from the same gene.
Using primer sets based on these overlapping EST sequences, we amplified the same reverse transcription-PCR product from adult kidney and brain RNA, whose translated amino acid sequence shows 67% homology and 46% identity to residues 670 to 779 of human polycystin-2. We further performed 5′- and 3′-rapid amplification of cDNA ends with skeletal muscle and kidney poly(A) RNA, respectively, and obtained 0.8-kb (5MR1) and 0.9-kb (3MR20) cDNAs. Using 5MR1 as a probe, we screened a human retina library. Three clones, PKDL-6, PKDL-7, and PKDL-8, were obtained and sequenced. The consensus 3,044-base pair sequence revealed an open reading frame of 2,415 base pairs, which encodes a protein of 805 amino acids (Fig. 1). The putative translation start site at cDNA position 384 (5′-TTCCCCATGA-3′) is not accompanied by a typical Kozak sequence. A single in-frame stop codon is found in the putative 5′-untranslated region. The open reading frame is followed by several in-frame stop codons, and the 3′-untranslated region contains a consensus polyadenylation signal (5′-AATAAA-3′) 10 nucleotides upstream from the poly(A) tail.
The deduced amino acid sequence of PKDL is shown in Fig. 1. Hydropathy analysis of the polycystin-L sequence showed five highly hydrophobic regions predicted to be transmembrane segments (Fig. 2A). Three additional relatively hydrophobic peaks were identified. Polycystin-L showed significant homology to polycystin-2, as expected (71% homologous, 50% identical). This homology is generally higher in predicted transmembrane segments and in the loops between transmembrane segments (Fig. 2B). Polycystin-L also showed a moderate similarity (similarity 45%, identity 22%) to polycystin-1 over residues 1 to 797. This similarity is slightly higher in transmembrane segments, but there is one conserved positively charged short amino acid stretch in the first loop between transmembrane segments (Fig. 2B).
Polycystin-L, like polycystin-2, shows homology (similarity ∼47%, identity ∼21% overall) to each of the four domains of various Ca2+ channel α1 subunits and other cation channels. Regions of homology are clustered in the last four transmembrane segments and the pore region of each domain of the Ca2+ channel α1 subunits (Fig. 2B). In polycystin-L and polycystin-2, the regions corresponding to this pore region include the last of the three relatively hydrophobic peaks. The first two-thirds of this region is predicted to form a helical structure, which is characteristic for various cation channels.
Two algorithms (12,13) predict that polycystin-L has a coiled-coil domain in its C-terminal cytoplasmic tail (Fig. 1). Polycystin-L also has a putative Ca2+ binding structure or EF-hand (25) that generally consists of two helices and a loop between them (Fig. 1). The C-terminal helix in the EF-hand of polycystin-L overlaps with the predicted coiled-coil region. Polycystin-L has a putative cAMP phosphorylation site in its C terminus. Putative protein kinase C phosphorylation sites are all in regions predicted to be cytoplasmic. Four of five putative casein kinase II phosphorylation sites with strong motif sequences (positions 249, 563, 674, 703, 719) are also found in the C-terminal cytoplasmic domain (Fig. 1). Multiple tissue RNA dot blot analysis using the 5′ 1.5-kb of the coding sequence as a probe revealed highest expression in adult heart and kidney (Fig. 3A). Northern blot analysis showed the presence of 5- and 1.5-kb bands in fetal tissues, including kidney, liver, and brain (Fig. 3, B and C). This result suggests the presence of alternatively spliced forms. The abundance of the two splice variants is ∼1:1 in fetal tissues. In adult tissues, however, the long transcript is only detected after prolonged autoradiography.
Chromosomal assignment of PKDL to 10q24 was achieved by fluorescent in situ hybridization on DAPI-stained metaphase human chromosomes using a 1.7-kb genomic probe (Fig. 4). In 17 of 21 metaphase preparations analyzed, a hybridization signal was found to be present on the long arm of chromosome 10 in band q24. In six spreads, both copies of chromosome 10 were labeled, and in 11 metaphase spreads, a signal was detected on a single chromosome 10. No signals were observed on other chromosomes. With the Stanford G3 radiation hybrid panel, PKDL was found to have an identical distribution pattern as polymorphic marker D10S603 (lod score greater than 1,000). Linkage analysis of the PKDL locus using flanking markers D10S603, D10S198, and D10S192 gave negative lod scores in six ADPKD families previously documented to be unlinked to the PKD1 and PKD2 loci (Table I).
The human PKDL gene is located within a linkage group that is conserved on the distal portion of mouse chromosome 19 (26) (Fig. 5A). A 7-centimorgan deletion of this region has been described in Krd mice (27). To determine whether the mouse homologue, Pkdl, is located within the Krd deletion, we analyzed genomic DNA from F1 animals obtained from a cross of strain C57BL/6J-Krd with strain SPRET/Ei. To detect the C57BL/6J-Krd-derived allele in the F1 DNA, we utilized restriction fragment-length polymorphisms detected by hybridization with a 1.5-kb human PKDL cDNA probe (Fig. 5B). Strain C57BL/6J contains three hybridizing TaqI fragments of 5.5, 5.0, and 1.8 kb. Strain C3H, on which the Krd mutation originally arose, contains three hybridizing TaqI fragments of 5.5, 5.0, and 1.6 kb. Strain SPRET/Ei contains two hybridizing fragments of 8 and 4.5 kb. The (C57BL/6J-Krd × SPRET/Ei) F1 mouse inherited the hybridizing fragments contributed by the SPRET/Ei parent but did not inherit the fragments from C57BL/6J or C3H (Fig. 5B). This result indicates that the mouse Pkdl locus is located within the region that is deleted by the Krd mutation.

DISCUSSION

A Novel PKD2-like Gene-The manifestations of PKD1- and PKD2-linked ADPKD are generally similar, raising the likelihood that the gene products function in the same or parallel biological pathways. Homology between polycystin-2 and the pore-forming α1 subunits of voltage-activated Ca2+ and Na+ channel proteins, combined with evidence of interaction between polycystin-1 and polycystin-2 (7,8), has led to the proposal that polycystin-2 forms homo- or heteromultimeric complexes with itself, with polycystin-1, or with another protein to function as an ion channel (9). Inasmuch as a small fraction of ADPKD families are not accounted for by PKD1 and PKD2 mutations and the function of polycystin family members may be cooperative, we postulated the existence of additional polycystin family members.
Here we report the identification and cloning of a third gene encoding a member of the polycystin superfamily, polycystin-L. Its gene, PKDL, is therefore an excellent candidate gene for human and murine cystic diseases. The temporal expression pattern of PKDL is similar to that of PKD1.
Sequence Analysis: Implications for Polycystin Function-The hydropathy patterns of polycystin-L and polycystin-2 are similar except in the region corresponding to the S4 segment of polycystin-2, where polycystin-L has a much lower hydrophobicity score, suggesting that this is a secondary membrane-spanning region. Polycystin-L and polycystin-2 both have putative EF-hand structures in their C-terminal cytoplasmic domains, suggesting that their functions are influenced by cytoplasmic Ca2+ concentration. In several Ca2+ channels, binding of Ca2+ to EF-hand structures inactivates the channels (28).
Polycystin-L and polycystin-2 show moderate but significant sequence similarity to Ca2+ and other cation channels, especially within their S3-S6 segments and the loop between the S5 and S6 segments. In addition, the last two membrane-spanning segments of polycystin-L, polycystin-2, and Ca2+ channel α1 subunits share structural characteristics with the Streptomyces lividans K+ channel (KcsA), whose structure has been determined by crystallography (29). The common structural features include: lining residues of the last membrane-spanning segments that are mostly hydrophobic except for the negatively charged acidic amino acid near the end of these segments; loops between the last two membrane-spanning regions (pore region) that are mildly hydrophobic (Fig. 2A); first 2/3 of pore regions that are predicted to form short helical structures (pore-helix); and finally, last 1/3 of pore regions that begin with negatively charged residues, which have been considered to determine the selectivity to Ca2+ in known Ca2+ channels (30).
Polycystin-L differs from polycystin-2 most significantly in the N-terminal cytoplasmic domain, where it lacks a 100-amino acid segment. In the C-terminal cytoplasmic domain, polycystin-L is strongly predicted to have a coiled-coil structure, which has the potential to interact tightly with molecules of similar structure, such as polycystin-1. Lupas' algorithm (12) also predicts a coiled-coil structure in polycystin-2, but this is not supported by Berger's algorithm (13).
Polycystin-L and polycystin-2 have three positively charged residues in S4, as opposed to five to eight in voltage-gated channels. Whereas the S4 region in voltage-gated Ca2+ channels is considered to be a voltage sensor (31), it is not clear whether a membrane-spanning region with only three basic residues could act as a voltage sensor. Polycystin-L also has several putative phosphorylation sites: one cyclic nucleotide, two protein kinase C, and four casein kinase II phosphorylation sites with strong motif sequences in the C-terminal cytoplasmic domain. Two other putative protein kinase C phosphorylation sites are also found in the N-terminal cytoplasmic domain. Phosphorylation of these motif sequences may be involved in the gating process of the channel. Another scenario is that the channel is gated by a direct or indirect signal from associating proteins, e.g. polycystin-1. Given that polycystin-1 has domains that may be involved in cell-cell or cell-matrix interaction and is known to interact with polycystin-2 (7, 8), we hypothesize that the binding of ligand(s) to polycystin-1 may be associated with the gating of a polycystin-related channel.

FIG. 2. A, hydropathy analysis of polycystin-L (Pc-L) and polycystin-2 (Pc-2). Hydrophobic peaks that are considered to be primary membrane-spanning regions are labeled S1, S2, S3, S5, and S6. Mild hydrophobic peaks indicating secondary transmembrane domains are labeled S1/2, S4, and p. a.a., amino acids. B, alignment of polycystin-L with polycystin-2 (gb U50928), polycystin-1 (Pc-1) (gb U24497), voltage-activated Ca2+ channel α1G, α1C, and α1E (31), and transient receptor potential-related channel 3, trpc3 (EMBL Y13758). Roman numerals indicate domains of voltage-activated Ca2+ channel α1 subunits. Positively charged residues in the polycystin-shared motif and S4 segment are marked with a plus sign; negatively charged residues in the pore-loop and S6 segment are marked with a minus sign.
Sequence analysis and comparison to other channels support the six- or seven-membrane-spanning plus one pore-region topology of polycystin-2 and polycystin-L. In addition to the five putative transmembrane segments, the middle of the three relatively hydrophobic peaks, which corresponds to S4 in α1 subunits of cation channels, is likely to be another transmembrane segment. Whether the N-terminal peak (S1/2) forms a membrane-spanning region is not clear.
One common feature of the polycystin-L/polycystin-2 structure that is rarely observed in known ion channels is that they both have relatively long extracellular loops between the first and the second putative transmembrane segments. Although this loop region does not show high homology to any known ion channels, polycystin-2 and polycystin-L maintain a high level of homology with each other in this region. Moreover, this region contains a 13-amino acid stretch with 3 to 4 basic residues that is conserved not only between polycystin-2 and polycystin-L but also with polycystin-1. The function of this polycystin-shared motif is not clear.
Chromosomal Assignment and Linkage Studies-Studies using D10S603, which maps to the same interval as PKDL by radiation hybrid mapping, and two adjacent markers, D10S192 and D10S198, did not reveal linkage in six non-PKD1, non-PKD2 families, making it unlikely that mutations in PKDL cause the disease in these families. Among other as yet unexplained human cystic kidney diseases, it is unlikely that PKDL plays a role in autosomal recessive polycystic kidney disease, as mutations in most autosomal recessive polycystic kidney disease families have been mapped to chromosome 6 (32). The PKDL locus can, however, be considered as a candidate for unmapped human genetic cystic disorders such as dominantly transmitted glomerulocystic kidney disease of postinfantile onset (33), isolated polycystic liver disease (34), and Hajdu-Cheney syndrome/serpentine fibula syndrome (35,36).
The region syntenic to the human PKDL locus is located on chromosome 19 in mice (26). This region is partially deleted in mice with the mutation Krd (kidney and retinal defects) (27). The 7-centimorgan Krd deletion is located between Tdt and Cyp17 and includes the paired box gene Pax2. Mice heterozygous for a null mutation of Pax2 frequently demonstrate reduction in kidney weight, which ranges from 10 to 100% of normal (37). The reduced size is due mainly to calyceal and proximal ureteral diminution as well as cortical thinning, with a reduced number of developing nephrons (37). In contrast, the phenotype of Krd/+ heterozygotes includes aplastic, hypoplastic, and cystic kidneys, as well as reduced viability on strain C57BL/6J (27). Our Southern analysis demonstrates that the mouse ortholog of PKDL is deleted in Krd mice. Further study is needed to clarify the contribution of Pkdl to the Krd phenotype.
Several other congenital murine and rat models with polycystic kidney disease are also known to exist, although the genetic defects in these models have yet to be identified (38,39). Among mouse PKD models, loci for cpk, bpk, pcy, jck, jcpk, and kd have been mapped to mouse chromosomes 12, 10, 9, 11, 10, and 10, respectively, and are unlikely to involve the mouse homologue of PKDL. In the Han:SPRD cy/+ rat, the disease gene was mapped to rat chromosome 5, whose syntenic region resides on human chromosome 8 (18).
"Biology"
] |
The Influence of House Prices on Marriage in China
This paper studies how house prices influence marriage in China from two perspectives: the age at first marriage and the marriage rate. For the age at first marriage, the results indicate that house prices have a significant delaying effect, and that this effect differs between males and females, with a greater delay for males. In terms of the marriage rate, house prices have an unexpectedly positive, albeit small, effect. Examining this effect further across groups of provinces, house prices have a negative effect on the marriage rate in more developed provinces, while the effect remains positive in developing provinces. Taken together, these results clearly show that house prices are an important factor influencing marriage in China. Hence, if the government wants to raise the marriage rate or lower the age at first marriage, focusing on house prices would be a sensible choice.
Introduction
Marriage is one of the most important factors in social well-being and is highly correlated with population growth [1]. According to data released by the National Bureau of Statistics of China, the natural population growth rate in 2022 dropped to -0.6‰, the first time China has faced negative population growth in 61 years. Japan, a country with more than 10 years of negative population growth [2], shows where this can lead: long-term population stagnation drives the aging problem, decreases the willingness to invest, and eventually lowers the economy's growth potential [3]. A decreasing fertility rate, marriage squeeze, higher divorce rate, and later marriage age have all contributed to the negative population growth rate [4]. Many studies have examined what drove marriage into these conditions and offer several explanations: higher education levels, an unbalanced sex ratio, and inequality in the labor market [5, 6, 7]. Females with higher education levels are expected to put more effort into work, so they are more likely to delay marriage and have fewer children [5]. The tradition of son preference and the one-child policy aggravated the male-skewed sex ratio in China, which squeezes the marriage market [6]. Inequality in the labor market decreases females' willingness to marry and also raises the impact of wage changes on males' marriage prospects [7].
In this article, we focus on a uniquely salient influencing factor in China: rising house prices. The property price-to-income ratio for China is 34, compared with values below 10 in developed countries such as the US, UK, Canada, and Germany [8]; this high value indicates that buying a house in China requires more than three times as many years of income. Several studies point out that purchasing a house prior to marriage is a typical Chinese social norm, so the cost of marriage rises with house prices [9,10]. Young people without a house are discouraged from entering marriage: they delay their first marriage and maintain a high saving rate to afford a house. For example, one article using Chinese data from 2000 through 2005 found that first-marriage rates declined by 0.31% for a 1% increase in house prices [1]. Another study of the relationship between the marriage rate and house prices, using a threshold regression, points out that the marriage rate decreases with higher house prices once prices exceed a threshold level [11]. Some scholars, examining the implementation of land reform in China after 2002, find evidence that this policy decreased the probability of marriage by 5.3% [12]. Instead of testing how house prices influence marriage, other scholars find evidence that the divorce rate can also affect house prices [13]. All of this literature motivates the present study of the relationship between marriage and house prices. This paper illustrates the relationship between age at first marriage and house prices at the national level and compares the effect between urban and rural households. It then uses province-specific data to compare the effect between provinces at different levels of economic development.
Data
The data used for the analysis are panel data collected from the Chinese National Bureau of Statistics; as officially published figures, we are confident in their accuracy. The dataset can be separated into two parts: national-level data and provincial-level data. The national-level data cover the years 2003-2020 and include the average age at first marriage (AFM), average house price (HP), government education expenditure, average salary, unemployment rate, consumption level, disposable income per capita, Engel coefficient, population density, percentage with tertiary education, sex ratio at ages 20-24, and sex ratio at ages 25-29. In this part, our main interest is the relationship between the AFM and HP; the other variables are control variables related to the AFM. For the AFM we collected data for the whole population as well as separately for males and females.
In the second part of the analysis, this paper collected provincial-level data for 31 provinces, dropped provinces with many missing values, and kept the remaining 25. Here the dependent variable changes to the marriage rate, calculated as the number of people registering a marriage divided by the total population in a year. This change of dependent variable is made not only because the government does not provide the AFM at the provincial level, but also because this paper seeks additional perspectives on how HP affects marriage: the AFM and the marriage rate are both important indicators of marriage, but they may respond differently to changes in HP and thus provide complementary views. The other control variables are education expenditure, average salary, unemployment rate, percentage with tertiary education, and disposable income per capita. Besides analyzing the whole dataset, this paper also separates the data into more developed provinces and relatively less developed provinces.
Age at First Marriage -National-level data
In the first part of the estimation, this paper focuses on the relationship between age at first marriage and house prices using national-level data. As a starting point, this paper runs an OLS model specified as follows:

AFM_t = β0 + β1 ln(HP_t) + Σ(k=2..11) βk X_{k,t} + ε_t    (1)

As mentioned in the Data section, these data were collected from 2003 to 2020. Our main interest is the value of β1, which illustrates how house prices affect the age at first marriage. In the estimation, this paper uses ln(HP) instead of HP to avoid heteroskedasticity caused by large differences in the scale of the variables. The coefficients β2-β11 measure how the control variables X_{k,t}, described in the Data section, affect the age at first marriage. Several studies raise concerns about an endogeneity problem between AFM and HP [1,9], so this paper also constructs a 2SLS model using the debt ratio and the profitability of the real estate industry as instrumental variables. A Hausman test comparing the two models indicates that the OLS model is consistent and efficient; hence this paper selects the OLS model, and both sets of results are shown in Table 1.
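As a minimal sketch of this estimation in Python, assuming the national-level series sit in a pandas DataFrame with the column names below (the names are illustrative assumptions, not the paper's dataset schema):

```python
# Minimal sketch of the national-level OLS specification (1) (illustrative;
# column names are assumptions, not the paper's dataset schema).
import numpy as np
import pandas as pd
import statsmodels.api as sm

def fit_afm_ols(df: pd.DataFrame):
    controls = [
        "edu_expenditure", "avg_salary", "unemployment", "consumption",
        "disp_income", "engel", "pop_density", "tertiary_share",
        "sex_ratio_20_24", "sex_ratio_25_29",
    ]
    X = df[controls].copy()
    X["ln_hp"] = np.log(df["house_price"])   # ln(HP) to tame scale differences
    X = sm.add_constant(X)
    return sm.OLS(df["afm"], X).fit()

# results = fit_afm_ols(df); print(results.summary())
```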
Focusing on the OLS estimates in Table 1, the coefficient on HP is significant at the 5% level and indicates that a 100% increase in house prices is associated with a 0.71-year increase in the age at first marriage. This may seem a small influence, but set against the actual data it implies a large impact: the average house price rose from 2,197 yuan/m² in 2003 to 9,980 yuan/m² in 2020, roughly a 4.5-fold increase, implying a delay of about 3.2 years (0.71 × 4.5) in the average age at first marriage. The coefficients of the other control variables are also significant; increases in average salary, unemployment rate, consumption level, Engel coefficient, population density, and percentage of tertiary education all raise the age at first marriage. The negative coefficient on education expenditure, however, is surprising, since education is widely believed to raise the age of marriage, as the coefficient on the percentage of tertiary education confirms. This may be because the tertiary-education coefficient already captures the delaying effect on marriage age. This paper then estimates the OLS model using the average age at first marriage for males and females separately, keeping the other independent variables the same. Although variables such as average salary and the percentage of tertiary education might differ by gender, this should not much affect the coefficient on house price. The comparable results for these two OLS models are shown in Table 2, and the coefficients clearly differ between the two genders. The coefficient on house price for males, 0.815, is 42% higher than the coefficient for females, 0.574, indicating that if house prices increase by 100%, the age at first marriage for males is delayed by 0.241 years more than for females. This result is in line with our expectation: in Chinese tradition the bridegroom should provide a house before marriage, so changes in house prices have a greater impact on the age at first marriage for males. It also indicates that rising housing prices widen the marriage-age gap between males and females, further squeezing the marriage market. There is also a clear difference in the effect of the tertiary-education percentage in the two models: its coefficient is significant only in the regression using data for females, where its value is almost double that for males. Higher education makes women more focused on their careers and thus enter the marriage market later, because the impact of marriage on women's careers is much greater than on men's [5].
Marriage rate - provincial-level data
After confirming the relationship between house prices and age at first marriage, and how it differs between males and females, the task in this part is to establish the relationship between the marriage rate and house prices. The data for the 25 provinces collected for this part cover most areas of China, and this paper constructs a random effect model from them. Panel data across provinces carry province-specific individual effects, so we choose between a random effect model and a fixed effect model; a Hausman test on both indicates that the random effect model is more appropriate. The random effect model is specified below:

$MR_{it} = \beta_0 + \beta_1 \ln(HP_{it}) + \sum_{j} \beta_j X_{j,it} + u_i + \varepsilon_{it}$  (2)

The marriage rate is the dependent variable, the independent variables are those explained in the Data section, and the average house price again enters in logs. Because the data come from different geographic areas, there may be unobserved within-cluster correlation, so this paper also uses cluster-robust standard errors. The results are shown in Table 3. The estimates from the random effect model do not match expectations: the coefficient of house price is 0.0027007. Although the value is quite small, it is positive, suggesting that the rise in house prices slightly increases the marriage rate. One interpretation is that although rising housing prices delay people's marriage age, they raise the marriage rate by a small margin, perhaps because when house prices increase, a small number of people can only afford a house jointly if they marry. The only two variables with negative coefficients are the unemployment rate and disposable income per capita, meaning that increases in these variables are associated with a decline in the marriage rate. That a higher unemployment rate reduces marriage rates is relatively easy to understand and has been noted in several studies [7]. Higher disposable income per capita may reduce marriage rates because people no longer have to marry for money, but can marry more for love. This paper then uses the same model but splits the dataset into a group of relatively developed provinces and a group of relatively developing provinces. The characteristics of the two groups differ considerably, especially since house prices in the developed areas are much higher than in the developing areas. The comparable results for these groups are shown in Table 4, where the house price level has a completely different influence in each group: increases in house prices have a negative effect on the marriage rate in developed provinces and a positive effect in developing provinces. There could be several reasons for this result. First, house prices in developed provinces may be too high, exceeding the threshold beyond which they depress the marriage rate [11]. Second, in line with our earlier reasoning, some people will marry to share the burden when house prices are high, but when prices become too high it also becomes harder to find someone willing to share this burden.
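A corresponding sketch for the provincial panel, again with synthetic data and illustrative column names; linearmodels' RandomEffects estimator with entity-clustered standard errors mirrors the specification and the clustering described above.

```python
# Minimal sketch of equation (2), assuming a (province, year)-indexed panel;
# the data generated here are synthetic placeholders, not the paper's data.
import numpy as np
import pandas as pd
import statsmodels.api as sm
from linearmodels.panel import RandomEffects

rng = np.random.default_rng(1)
idx = pd.MultiIndex.from_product(
    [[f"prov{i}" for i in range(25)], range(2003, 2021)],
    names=["province", "year"])
cols = ["edu_exp", "salary", "unemployment", "tertiary", "income"]
panel = pd.DataFrame({c: rng.normal(size=len(idx)) for c in cols}, index=idx)
panel["ln_hp"] = rng.normal(8.0, 0.5, len(idx))
panel["marriage_rate"] = (0.007 + 0.003 * panel["ln_hp"]
                          + rng.normal(0, 0.002, len(idx)))

# Random effects with standard errors clustered by province.
exog = sm.add_constant(panel[["ln_hp"] + cols])
res = RandomEffects(panel["marriage_rate"], exog).fit(
    cov_type="clustered", cluster_entity=True)
print(res.params["ln_hp"], res.std_errors["ln_hp"])
```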
Conclusion
This paper applies OLS and random effect models to explore the relationship between house prices and marriage, using the age at first marriage and the marriage rate as the two measures of marriage. The results show a significant delaying effect on age at first marriage, and this impact is 42% higher for males than for females. From the marriage-rate perspective, house prices have a very small positive impact overall, but the effect differs completely between developed and developing provinces: negative in the former and positive in the latter. The findings confirm that house prices have a significant effect on marriage from the perspectives of both age at first marriage and marriage rate. Controlling house prices may be one way for the government to improve marriage conditions, for example by balancing housing supply and demand with more punitive policies against housing speculation in order to stabilize or even lower prices. The government could also introduce marriage-related purchase benefits, such as lowering property tax for married couples. The current work still has many limitations: the dataset covers all regions of China and was not separated into urban and rural regions, yet every variable should differ greatly between the two, and people in these regions hold different conceptions of marriage. Further study could build on these urban-rural distinctions.
"Economics",
"Sociology"
] |
LINGUISTICS IN APPLIED LINGUISTICS: A HISTORICAL OVERVIEW
This paper looks at some of the underlying reasons which might explain the uncertainty surrounding applied linguistics as an academic enquiry. The opening section traces the emergence of the field through its professional associations and publications and identifies second and foreign language (L2) teaching as its primary activity. The succeeding section examines the extent to which L2 pedagogy, as a branch of applied linguistics, is conceived within a theoretical linguistic framework and how this might have changed during a historical period that gave rise to Chomskyan linguistics and the notion of communicative competence. The concluding remarks offer explanations for the persistence of linguistic parameters in defining applied linguistics.
INTRODUCTION
Perhaps the most difficult challenge facing the discipline of applied linguistics at the start of the new millennium is to define the ground on which it plies its trade. The field is not infrequently criticised and derided by the parent science, linguistics, which claims authority over academic terrain that applied linguists consider their own. It should be said, however, that applied linguists themselves seem to attract such adversity because of the lack of consensus within their own ranks about what it is they are actually engaged in. The elusiveness of a definition that might string together an academic domain of such wide-ranging diversity, which appears to be in a perennial state of expansion, makes definitions unsafe, not to say time-bound, and in this sense Widdowson's (2000a: 3) likening of the field to "the Holy Roman Empire: a kind of convenient nominal fiction" is uncomfortably close to the truth. Indeed, a geographical metaphor which alludes to the merging together of the numerous scattered principalities of modern-day Germany both freeze-frames a discipline in the early stages of its development and embraces the paradox of an enquiry that extends out in all directions without leaving behind the sort of trace that might serve to delimit its investigative boundaries.
There is a sense of inevitability about this state of affairs in that, like all applied sciences, applied linguistics "begins from local and quite practical problems" (Candlin 1988: vii) so that the point of reference is continually changing. If one accepts the notion that practice precedes theory in applied linguistics - and thus by extension determines theory - then one may also appreciate how context, as defined by time and locality, is likely to describe an academic discipline which is characterised by its very dynamism. "Uncertainty", as Widdowson (2000a: 3) suggests, may be one of the reasons why "applied linguistics has flourished", but the central issue at stake in defining the field would seem to be more closely connected with "directionality" (Widdowson 1980: 169). The practice-before-theory paradigm might, for many applied linguists, describe the central plank upon which the discipline is built, although the antithesis of this approach, theory-before-practice, has just as often been used to solve applied linguistic problems (de Beaugrande 1997: 310).
This paper attempts to shed some light on why the field of applied linguistics continues to generate doubt and misgiving as an academic enquiry. The first part looks at the emergence of the discipline through the formation of its associations and publications and identifies the practical area of second and foreign language (L2) teaching as being the principal focus of research activity in the field. The second part examines how theoretical linguistics has come to form a point of departure in defining L2 pedagogy (1) and the extent to which this might have changed specifically with the advent of Chomskyan theory and the subsequent development of the notion of communicative competence.
THE EMERGENCE OF A DISCIPLINE
The development of applied linguistics as an academic enquiry can be traced back to the middle of the last century with the emergence of a number of research institutions and university departments, and later of professional associations such as AILA, BAAL and AAAL, whose own statements of purpose are revealing. (1) Indeed, the two-item definition of applied linguistics offered by BAAL (1994) includes "an approach to understanding language issues in the real world, drawing on theory and empirical analysis." AAAL (2002) is more specific about the nature of its work in listing one of its primary activities as "to network with Teachers of English for Speakers of Other Languages (TESOL)." Curiously, though AILA came into existence with a mandate "to encourage the spread and improvement of language teaching on an international scale" (Strevens 1966: 63), its statutes and bylaws appear to reduce the practical application of its research to a list of twenty areas which includes "first and second language education" (AILA 2002). Thus, the field of language teaching, which is clearly seen as a principal application of applied linguistics in AAAL, is not accorded the same importance in current AILA and BAAL documents.

(1) The enormous impact of other academic disciplines on L2 pedagogy, notably second language acquisition (SLA), is fully acknowledged. Nevertheless, the complexity of the field of SLA might be dealt with more profitably in a separate article, and will not, therefore, be considered here. The term L2 pedagogy (and indeed, L2 teaching) is used in its broadest sense to include learning as well as teaching in a classroom context.
Finally, all three associations supply lengthy lists of subdisciplines, topics and scientific commissions which underline the "multidisciplinary" and "interdisciplinary" nature of applied linguistics. And although the actual names of these research areas may vary from list to list and from association to association, the ground covered by each one describes what is essentially the same academic terrain.
This threefold pattern of i) linguistics; ii) the practical focus of the field; and iii) its multidisciplinariness is one which was confirmed in the debate about the scope of applied linguistics that took place at the 1999 AILA Congress in Tokyo. The discussions were, according to Grabe and Kaplan (2000: 4-5), characterised by the disaccord between participants although eight key points were eventually drawn up representing those which "most applied linguists would agree on." These may be reduced to the three elements outlined above without any loss of overall meaning since six of the points are basically refinements pertaining to the field's multifariousness which, evidently, was the contentious part of the debate.
APPLIED LINGUISTICS AND L2 PEDAGOGY
One might imagine that the scope of applied linguistics as defined by some of its most prestigious international associations would be mirrored in definitions proffered by its leading publications. Yet this is not entirely the case nor, as we shall see, is it without significance. The three overarching issues outlined in this paper as defining the discipline are certainly replicated in the collective objectives of applied linguistic publications, but the wording of the titles of the journals foregrounds one facet of the field which is not immediately apparent in the statutes of associations: applied linguistics is very often concerned with language learning and teaching, and the job of the applied linguist is, therefore, one of mediating between such theory as may be connected with language pedagogy and its realisation in the classroom. Some of the applied linguistic journals cited above leave little doubt regarding the activities that they are engaged in. For example, English Language Teaching Journal (ELTJ), Language Learning and TESOL Quarterly are manifestly concerned with issues in second and foreign language (L2) pedagogy. However, it is not quite so apparent that a publication which is almost invariably referenced as simply IRAL is in fact called The International Review of Applied Linguistics in Language Teaching - to give it its full title - even though the acronym makes no reference to language teaching. Similarly, neither Language Learning nor System is referred to in its subtitled entirety, which makes it clear that the former is A Journal of Applied Linguistics and the latter An International Journal of Educational Technology and Applied Linguistics. Nor would outsiders to the discourse community of applied linguistics perhaps realise that System specifically focuses on "the problems of foreign language teaching and learning" and that the Modern Language Journal is not so much a publication which is concerned with language, as the title might imply, but one "devoted to research and discussion about the theory and teaching of foreign and second languages." Moreover, the aims of ELTJ may amount, for some in the field, to a concise definition of the work of the applied linguist: "ELTJ seeks to bridge the gap between the everyday practical concerns of ELT professionals and the related disciplines of education, linguistics, psychology and sociology." One of the acknowledged flagships in the field, Applied Linguistics, with its long list of eminent American-British editors and distinguished advisory boards, is unequivocal in its view of applied linguistics "as the study of language and language-related problems in specific situations in which people use and learn languages," and it is perhaps not by chance, therefore, that it goes on to list ten broad areas of study starting with "first and second language learning and teaching" and continuing with: critical linguistics; discourse analysis; language and education; language planning; language testing; lexicography; multilingualism and multilingual education; stylistics and rhetoric; and translation.
It should be said that this list is by no means exhaustive - AILA, for example, lists twenty-five "scientific commissions" and in Spain, AESLA conference proceedings are routinely divided into a similar number of sections - but the repetition of the word "education" in the categories that appear in Applied Linguistics, and the implication of educational applications in "language planning and testing", suggests that language learning and teaching in all its manifestations is perhaps closer to a superordinate than a discrete category.
Certainly, with regard to academic output in the field, much the greater part of applied linguistics is concerned with language teaching and learning. Indeed, Crystal (1991/1980) notes that "sometimes the term is used as if it were the only field involved". Cook and Seidlhofer (1995: 7) support this claim, observing that in spite of "the potentially wide scope of the field, it is with language teaching and learning, and particularly English language teaching and learning, that many works on applied linguistics are primarily concerned." The elevated importance of English language teaching (ELT) above other languages is, it seems, a logical consequence of "the emergence of English as a global lingua franca" (Graddol and Meinhof 1999: 1). Brown (1987: 147) observes that the term applied linguistics carries with it a transatlantic nuance: "the common British usage of the term… is almost synonymous with language teaching." Not many applied linguists on either side of the Atlantic would disagree with this claim since the evolution of British applied linguistics is, as we shall see, intricately bound up with L2 teaching. Nevertheless, to a lesser or greater extent, the essence of the field on a global level is highlighted in Kaplan and Widdowson's (1992: 77) review of the ERIC system (an international database sponsored by the US government) over two decades, which reveals that approximately 45% of entries for applied linguistics "are in some way concerned with language teaching." And whilst Brown's assertion suggests that this figure might be significantly higher in Britain, it appears to be one which holds true in Spain. In the introduction to the XIII AESLA conference proceedings, for example, Otal et al (1997: 18) draw attention to the fact that more than half of the 100 papers presented in the congress were related to "las áreas de enseñanza y aprendizaje de lenguas" [the areas of language teaching and learning]. (2) An examination of subsequent AESLA conference proceedings up to the most recent edition in 2001 reflects a similar picture.
Thus, one might draw certain interim conclusions about the current state of the field and its investigative orientations. To start with, despite persistent doubts surrounding the long-term existence of applied linguistics both from within and outside the field, the steady growth of the discipline from the 1950s onwards, as expressed in the burgeoning number of associations, courses, institutions and journals, is testament to the fact that the academic ground that it occupies is not only a reality but that "institutional status has been comprehensively conferred upon it" (Widdowson 2000a: 3). In the second place, the scope of the enquiry has broadened considerably over the years, but the reality of applied linguistics for the majority - a figure approaching 50% - of those engaged in the field in Spain is that of an academic pursuit which is intimately related to language teaching and learning. In sum, applied linguistics may be said to be typified as that activity which informs L2 pedagogy and, therefore, derives its theoretical underpinnings from linguistics as well as a wide spectrum of neighbouring disciplines connected with language teaching and learning. The three conceptual struts identified in this study - linguistics, the practical focus of the field and its multidisciplinariness - are clearly visible in this organisational scheme, although the relationship between them is a hierarchical one with the practical activity of L2 teaching forming the focal point at the apex of the triangle.

(2) More precisely, the categories on which this approximation is based are "enseñanza de lenguas" [language teaching] (31 papers) and "adquisición de lenguas" [language acquisition] (23 papers). Although one might argue that the latter is not always directly related to L2 pedagogy, one might also argue that other "panels" could have been brought into the approximation. For example, 2 of 4 papers on both the "sociolingüística" [sociolinguistics] and "estilística y retórica" [stylistics and rhetoric] panels refer directly to L2 pedagogy.
LINGUISTICS AND LANGUAGE TEACHING
One issue which has generated a good deal of debate in applied linguistics is connected with the name of the enquiry. As Widdowson (1980: 165) observes: "One does not have to embrace extreme Whorfian doctrine to recognize that how a thing is called can have a critical effect on how it is conceived." Whilst this is nothing if not a truism, Henry Widdowson, one of the leading lights in the field, has over the years drawn attention to the fact that the shaping of the discipline owes as much to who describes it as how it is described. Linguists as well as applied linguists have attempted to define the field but Widdowson argues that linguistic descriptions are misconceived.
From the outset, it is important to underscore the fact that much of the polemic surrounding the issue of language in applied linguistics has been contextualised within its most common application which, as we have noted above, is in L2 teaching and especially ELT. Under these terms of engagement, the central question for Widdowson now as then has revolved around the issue of how far "linguistic descriptions can adequately account for their reality for learners and so provide a point of reference for the design of language courses" (Widdowson 2000b: 21). Whether linguists have themselves advanced descriptions of language for pedagogical consumption or whether foreign language teaching has looked to theoretical linguistics to provide it with models of language, applied linguists like Widdowson (2000a: 3) would argue that linguistics as an academic enquiry has in such cases "breached its traditionalist formalist limits". He (2000b: 29) continues: "Linguists have authority in their own domain. They describe language on their own terms and in their own terms. There is no reason why they should assume the responsibility of acquiring expertise and authority in the quite different domain of language pedagogy." To be sure, the two fields are derived from sharply divided linguistic traditions. Theoretical linguistic research has concerned itself with language as an internalized or abstract construct; whilst applied linguistics has examined its external manifestation in social contexts and very often with reference to how it might serve the L2 learner. Thus, broadly speaking, the former conceives language as "langue" rather than "parole" (Saussure 1916); "competence" as opposed to "performance" (Chomsky 1965); and, more recently, "I-language" in contrast to "E-language" (Chomsky 1986). In short, linguistics is about the theory of language while applied linguistics is about its practice, or more specifically, applying suitable descriptions of language for use in L2 pedagogy.
However, the practical activity of L2 teaching also comprises an integral part of the history of theoretical linguistics. It was, as Gleason (1965: 49) reminds us, structural linguists like Bloomfield and Fries who "were called in to prepare class materials for the (then) 'Army Method'" - later to be called the audiolingual method - during the Second World War, and "since the war linguists have been increasingly involved in applied linguistics." Thus, after the war, theoretical linguistics emerged as the "new mentor discipline … [which] … replaced literature and education as the research base for foreign language teaching and learning" (Kramsch 2000: 313). The influence that linguists had on L2 teaching at this time is not in question, but Kramsch overstates her argument in suggesting that this was in any way a "new" philosophical trend. Leonard Bloomfield, for example, was "committed to the idea that his discipline should find a useful role in the community" (Howatt 1984: 265), and this is wholly evident in his earlier works, which were published decades before the popularization of audiolingualism. Moreover, one only has to turn back the pages of language teaching history to come across linguistic scientists of the ilk of Henry Sweet, Otto Jespersen and Harold Palmer to realise that this relationship between theory and practice is one which is built on years of tradition. There are echoes of such a notion in Titone's (1968: 49) Teaching Foreign Languages: An Historical Sketch and his citing of Jespersen's "indebtedness to a longer series of linguists" in the preliminary pages to the Danish linguist's How to Teach a Foreign Language (1947).
The widespread success that audiolingualism was to enjoy for a twenty-year period after the war was only matched by the severity of the criticism against it, as changes in linguistic theory in the early 1960s and the emergence of psycholinguistics at about the same time laid bare the shortcomings of an L2 teaching methodology which lacked what Wilga Rivers (1964: 163) referred to as "a full awareness of the human factors" involved in communication.
Rivers' attack was fuelled by the publication of Chomsky's (1957) Syntactic Structures and, two years later, his scathing review of B. F. Skinner's Verbal Behavior, which collectively fractured the two philosophical pillars on which audiolingualism rested: "the linguistic idea that language is purely a set of sentence patterns (structural linguistics) and the psychological idea that language learning was just habit formation (behaviorism)" (Brown Mitchell and Ellingson Vidal 2001: 30). Chomsky argued that linguistic behaviour was innate rather than learned, and his model of transformational-generative grammar was fundamental to the decline of audiolingualism; it was also, with limited success, applied to language teaching in transformation drills, which sought to elicit the underlying deep structures (what is in the mind) of sentences from an examination of surface structures (what is spoken or written). Chomsky himself was 'rather sceptical' that his theories had any relevance for the teaching of languages (Howatt 1984: 271), although once again linguistic insight had found its way into L2 pedagogy. Yet the demise of audiolingualism and subsequent attempts by advocates of Chomskyan linguistics to isolate language from its social and psychological context did succeed in revealing the central flaw of applying linguistics directly to pedagogical contexts: that, in Corder's (1973: 29) words, language learning is about "people who can talk the language rather than talk about the language"; or as Allen (1974: 59) put it, linguistic knowledge "is concerned with a specification of the formal properties of a language, with the 'code' rather than with the 'use of the code'." Applied linguists, it seemed, were suddenly given a more definite role, to act as "a buffer between linguistics and language teaching" (Stern 1992: 8), and the misguided directionality perceived by some, of putting the solution before the problem, was dually expressed in Widdowson's neologism "linguistics applied" (1980: 165) and the wry observation: "You do not start with a model as given and cast about for ways in which it might come in handy" (Ibid 169). Widdowson's comments in 1980, originally delivered in the form of a paper at the 1979 BAAL conference, were certainly well placed before an audience made up of predominantly British as opposed to American applied linguists. By this time, British (as well as some parts of the Commonwealth) applied linguistics was distancing itself from the top-down traditions of North American linguistics applied, which "grew out of the search by linguists (e.g. Bloomfield, Fries) for applications for their theoretical and descriptive interests" (Davies 1993: 17), rather than the bottom-up approach that characterised the field in Britain, which "starts with the practical problems and then seeks theoretical (and/or practical) ways to understand and resolve those problems." In addition, the final years of the 1970s and the opening of the new decade may be thought of as bringing to a close a period of intense activity in applied linguistics which commenced with the advent of Chomskyan linguistics. The effect of Chomsky's ideas on language teaching during these years saw a shift in North American applied linguistics towards cognitive approaches, and the work of Burt, Dulay and Krashen was evidence of both a distinct line of enquiry and the emergence of a new academic discipline, namely second language acquisition (SLA).
At the same time, in Britain, the principal task of applied linguists, it seemed, was one of self-justification borne of their successive rejection of structuralist and generative linguistic applications to language teaching. If British applied linguists such as Allen, Brumfit, Corder, Mackin, Strevens and Widdowson, inter alia, had been making strong claims that only a mediating discipline with specialist knowledge in linguistics and pedagogy was suitably qualified to define an appropriate model of language for L2 teaching, then what, the question arose, was their solution?
By the early 1980s, an applied linguistic model of language for L2 pedagogy had evolved within the framework of communicative language teaching (CLT). The catalyst for what became popularly known as the 'communicative movement' or 'revolution' is usually attributed to the work of the American sociolinguist Dell Hymes (1972), and his coining of the term communicative competence, which involved not only an internalised knowledge system of linguistic rules (as originally posited by Chomsky in his conceiving of the term 'competence') but, crucially, a pragmatic knowledge which enabled this system to be used appropriately in communicative settings. As Hymes (1971: 10) observed, "These are rules of use without which the rules of grammar would be useless." A rash of publications followed Hymes' initial studies and contributed to the development of CLT. These included Wilkins' (1976) Council of Europe sponsored Notional Syllabuses, which proposed three categories of language organisation (semantico-grammatical, modal meaning and communicative function, subsuming Austin (1962) and Searle's (1969) theory of speech acts); Widdowson's (1978) treatise for CLT, Teaching Language as Communication, which highlighted 'use' rather than 'usage'; Munby's (1978) Communicative Syllabus Design, which recognised the pedagogic possibilities of M. A. K. Halliday's (1972) 'sociosemantic networks' in a 250-item taxonomy of language skills; and Breen and Candlin's 1980 paper, which viewed language as 'communication' in preference to 'code'.
The basic thrust of these works was a movement away from the structural system of language to a focus on its meaning potential as expressed in communicative functions (e.g. apologising, describing, inviting, promising). Moreover, the confluence of British functional linguistics (Firth, Halliday), American sociolinguistics (Hymes, Gumperz, Labov), as well as philosophy (Austin, Searle) represented "a move away from linguistics as the main or only basis for deciding what the units of language teaching would be" (Lightbown 2000: 435). It also prefigured the multifaceted discipline that the field of applied linguistics is today and, furthermore, reflected the importance of context in defining its lines of enquiry. Henceforth, American applied linguistics, in contrast to its British counterpart, was characterised by its reference to SLA.
Finally, in 1980, Canale and Swain brought order to "the somewhat confused scene of communicative language teaching" (Stern 1993: 164) with a comprehensive analysis of the linguistic influences in CLT which, significantly, appeared in the opening article of the first edition of Applied Linguistics. This, together with a later study by Canale in 1983, defined communicative competence by dividing it into four separate categories: grammatical competence, sociolinguistic competence, discourse competence and strategic competence. Bachman (1990) and Bachman and Palmer (1996) developed further, more complex representations of Canale and Swain's (1980; Canale 1983) model, but the original framework is, as Brumfit (2001: 51) remarks, the one which "has become the preferred basis for subsequent discussion" of communicative competence as a goal in L2 pedagogy.
CURRENT TRENDS IN L2 PEDAGOGY: CONFIRMING THE PATTERN
CLT may not in itself provide the methodological blueprint for L2 pedagogy that it did ten or fifteen years ago but communicative competence is still "the most widely developed metaphor in foreign language teaching" (Brumfit 2001: 47). In terms of language presentation, recent investigation emerging from studies in SLA has pointed to "the need for direct instruction and corrective feedback" (Pica 2000: 11) thereby acting as a palliative for stronger versions of CLT which have (over)emphasised induction at the input stage. Current L2 teaching methodology is as a result more eclectic in its tendency to "incorporate traditional approaches, and reconcile them with communicative practices" (Ibid: 15).
Yet, like the field of applied linguistics itself, language teaching is "determined by fashion" (Davies 1993: 14), and whilst there are those who would claim that no one method can account for the infinite variety of learner needs (Kumaravadivelu 1994), the persistent dependency of pedagogy on linguistic descriptions has given rise to a new generation of course books whose language content is determined by data derived from language corpora. Sinclair (1991), amongst others, has mounted a strong case for the inclusion of corpus-based research in L2 pedagogy. Widdowson (2000a: 7), predictably perhaps, dismisses the pedagogic application of corpus linguistics as linguistics applied, "the textually attested ... not the encoded possible, nor the contextually appropriate", which ignores the classroom reality for learners. Whether or not one chooses to "resist the deterministic practices of linguistics applied" (Widdowson 2000a: 23), a perceivable uncertainty hangs over applied linguistics, an enquiry which, one might argue, has stepped into an academic breach of its own creation - the gap between the theory and practice of language teaching - but which cannot yet agree upon some of the fundamental issues girding up its own existence, such as "the most appropriate model of language which should underpin FL pedagogic grammar" (Mitchell 2000: 297).
Language teaching is inextricably linked to the needs of the real world. It is "a social and often institutional activity" (Cook and Seidlhofer 1995: 8) that is moved by government decree and commercial interests, and these in turn are informed by pedagogical theories and insights. Educational policy makers and international publishing houses look to the linguistic sciences for solutions and innovation, and when theoretical linguistic insight is adapted to language teaching, the role of applied linguistics is put in doubt. As Grabe and Kaplan (2000: 3) remark in the introduction to the Annual Review of Applied Linguistics 20, aptly subtitled Applied linguistics as an emerging discipline, "full disciplinary acceptance will only occur to the extent that applied linguistics responds to wider societal needs and its professional expertise is valued by people beyond the professional field." Linguistics, as we have seen in this brief historical overview, has always formed a part of applied linguistics, though it has sometimes assumed an importance which is out of step with its real value to L2 teaching. But this is not a revelation. A generation ago, Munby (1978: 6), with reference to specifying a model of communicative competence for L2 pedagogy, added his name to a growing list of scholars who had questioned the centrality of theoretical linguistics in language teaching: "It may well be a case that a theory of linguistics is neither necessary nor sufficient basis for such a study." This is essentially the point that Brumfit (1980: 160) was making in his observation that "language ... operates simultaneously in several dimensions at once" and that if the framework for enquiry is conceived within purely linguistic terms in accordance with the name applied linguistics, then we run the risk of becoming "prisoners of our own categorisations." More recently, Spolsky (2000: 157) has reiterated these sentiments in his allusion to being "trapped … by a too literal assumption that applied linguistics needed only linguistics." His own preferred term, Educational Linguistics, along with the label which is gaining currency in faculty departments and on degree programmes, Applied Language Studies, recognises that "there is more to be known about language that is applied than just linguistics" (de Bot 2000: 224).
TOWARDS A CONCLUSION
The utility of linguistics to L2 pedagogy is a debate which has engaged two generations of applied linguists. It comprises part of the field's past traditions, its present trends and future directions forming a continuum which describes development and change in applied linguistics. It is a debate which can be located at each end of the continuum and one in which the rhetoric used, as well as some of its principal purveyors, appears to have evolved little over time to the extent that one might, with some justification, question how far applied linguistics as an enquiry has moved forward. It is, perhaps, the answer to this question which continues to shroud the field in uncertainty.
There are at least two issues that may have protracted the linguistics-in-applied-linguistics argument, and neither is directly connected with the academic debate. The first is concerned with the institutional status of applied linguistics specifically within the academic hierarchy. Kramsch (2000: 319) notes that "there is some confusion about the academic and scholarly respectability of a field that is often viewed as having to do exclusively with teaching, not research." Kramsch frames her discussion within a North American context, but there are certainly resonances of such a stance in British and Spanish academic institutions. And, as we have seen here, applied linguistic associations like BAAL and AILA appear to be reticent to openly state that the most common application of research in the field is in L2 teaching and learning. These associations cast a wide net to define what it is that they do. But to describe a field as 'multidisciplinary' is to describe almost all areas of academic enquiry; to describe an 'applied' science as 'practical' is to define it using a synonym. Neither is satisfactory. Conversely, a field which is conceived in terms of discourse analysis, stylistics, psycholinguistics, sociolinguistics, sign linguistics and deafness studies, language pathology and therapy, and human rights in the language world (items drawn from AILA's list of research areas; 2002) acquires instant cachet and academic robustness. But the tenuous relationship that some of these disciplines have to the more central concerns of applied linguistics, like L2 pedagogy, is misleading to the point of misrepresentation.
The second issue is engendered in Kramsch's (2000: 317) comment: "The field of Applied Linguistics speaks with multiple voices, depending on whether one's original training was in linguistics, anthropology, psychology, sociology, education, or literature." The most frequently cited 'voices' in applied linguistics carry with them an authority which determines present orientations and future directions in the field. But these voices too speak in accents which betray their academic origins, and if applied linguistics really is defined, as many scholars have pointed out, by its context, then it may be pertinent to question the absolute value of those conceptual frameworks which have persisted in the field.
"Linguistics"
] |
Apoptosis of Hepatocellular Carcinoma Cells Induced by Nanoencapsulated Polysaccharides Extracted from Antrodia camphorata
Antrodia camphorata is a well-known medicinal mushroom in Taiwan and has been studied for decades, especially with a focus on anti-cancer activity. Polysaccharides are the major bioactive compounds reported to have anti-cancer activity, but debate over how they target cells remains. Research addressing the encapsulation of polysaccharides from A. camphorata extract (ACE) to enhance anti-cancer activity is rare. In this study, ACE polysaccharides were nano-encapsulated in chitosan-silica and silica (denoted ACE/CS and ACE/S, respectively) to evaluate their apoptotic effect on a hepatoma cell line (Hep G2). The results showed that ACE polysaccharides, ACE/CS and ACE/S could all damage the Hep G2 cell membrane and cause cell death, especially in the ACE/CS group. In apoptosis assays, DNA fragmentation and sub-G1 phase populations increased, and the mitochondrial membrane potential decreased significantly after the treatments. ACE/CS and ACE/S could also increase reactive oxygen species (ROS) generation, induce Fas/APO-1 (apoptosis antigen 1) expression and elevate the proteolytic activities of caspase-3, caspase-8 and caspase-9 in Hep G2 cells. Notably, ACE/CS induced a similar apoptosis mechanism at a lower dosage (ACE polysaccharides = 13.2 μg/mL) than ACE/S (ACE polysaccharides = 21.2 μg/mL) and free ACE polysaccharides (25 μg/mL). Therefore, the encapsulation of ACE polysaccharides in chitosan-silica nanoparticles may provide a viable approach for enhancing anti-tumor efficacy in liver cancer cells.
Introduction
Medicinal mushrooms have become increasingly popular in recent years for their potential in disease prevention, especially tumor inhibition [1,2]. Antrodia camphorata is a well-known mushroom that has been used as an herbal medicine for centuries in Taiwan, in the treatment of various conditions, e.g., diarrhea, abdominal pain and hypertension [3,4]. Moreover, the pharmacological properties of A. camphorata, such as hepatoprotective properties [5,6] and anti-tumor activities [7-9], have been reported and summarized. Generally, polysaccharides and triterpenoids are the major bioactive components in medicinal mushrooms, and the polysaccharides extracted from A. camphorata have been found to have anti-hepatitis B surface antigen [10] and anti-tumor effects [3,11].
Rapid development in nanotechnology has engendered a variety of nanoparticles that can deliver novel bioactive ingredients into cancer cells [12]. Natural and synthetic polymers as well as inorganic materials can be used to construct functional nanoparticles. Among these materials, silica and chitosan have received particular attention because of their abundance in nature. Chitosan is a cationic polysaccharide with attractive chemical and biological characteristics for pharmaceutical purposes [13] and has been incorporated into various carrier systems for drug delivery to improve absorption and targeting [14-20].
Silica nanostructures allow biomolecules to be encapsulated and can therefore deliver multiple clinical functions. Most researchers synthesize silica of various sizes, morphologies and surface functionalities from silicon alkoxides by the sol-gel process [21,22]. In the past decade, a variety of silicate-based biohybrid materials containing silica and biopolymers have been produced and are widely used in bio-encapsulation applications due to their low environmental impact, low toxicity and good biocompatibility [23]. We recently demonstrated that chitosan is capable of forming composite nanoparticles with silica [24]. Moreover, the presence of chitosan in the composite, i.e., chitosan-silica nanoparticles, significantly reduced the cytotoxicity of the silica nanoparticles [25]. Nevertheless, little information is currently available on how encapsulation in biopolymer-silica hybrid nanoparticles changes the efficacy of bioactive ingredients.
The aim of this study was to evaluate the effects of A. camphorata extract (ACE) polysaccharides, ACE polysaccharides encapsulated in chitosan-silica nanoparticles (ACE/CS) and encapsulated in silica nanoparticles (ACE/S) on apoptosis of cells in a human liver cancer cell line (Hep G2). We investigated the cell cycle, mitochondrial membrane potential, Fas/APO-1, caspase-8, caspase-9 and caspase-3 signaling molecules, which are strongly associated with the apoptosis signal transduction pathway and are related to the responses of tumor cells treated with anti-cancer compounds.
Preparations of ACE polysaccharides, ACE/CS and ACE/S
Powdered fruiting bodies of cultivated A. camphorata (approximately 70 g) were soaked in 500 mL of 70% ethanol for 12 h. The suspension was centrifuged at 5,520× g for 30 min to remove insoluble matter, and the supernatant was concentrated with a vacuum evaporator until the volume reached 100 mL. This solution was dialyzed (10 kDa MWCO) overnight, filtered through filter paper (pore size 0.45 μm) and freeze-dried to powder form for further analysis. For nanoparticle preparation, the chitosan purchased from a commercial supplier had a degree of deacetylation (DD) of 81% and a molecular weight (Mw) of approximately 200 kDa. ACE/CS was prepared as previously described [24]. Briefly, sodium silicate was dissolved in 30 mL of buffer (0.05 M sodium acetate solution) to prepare a 0.55% (w/w) solution (pH = 6.0). After 10 min of agitation with a magnetic stirrer, 6 mL of ACE polysaccharide solution (0.1% w/w) and 3 mL of chitosan solution (0.55% w/w) were added. The resulting solution was mixed completely and set aside for particle synthesis (4 h) without disturbance. ACE/CS was collected by centrifugation at 5,520× g for 30 min. For ACE/S preparation, a silica solution at the same concentration described above, without chitosan solution, was used, and ACE/S was collected after 12 h of the particle synthesis process.
Cell viability analysis
Hep G2 cells were seeded in 96-well plates at a density of 2.0 × 10⁵ cells/well. After incubation for 8 h, the medium was removed and replaced by 20 μL of different concentrations of ACE polysaccharides, ACE/CS, ACE/S and nanoparticles without loading, which were suspended in deionized water and ultrasonicated for 1 h to prevent agglomeration. Controls were cells treated with an equivalent volume of serum-free medium instead of the suspensions. Cells were further incubated for 24 h or 48 h, and viability was assessed using an MTT (3-(4,5-dimethylthiazol-2-yl)-2,5-diphenyltetrazolium bromide) test [26]. In brief, fresh medium with 100 μL of 5 mg/mL MTT stain (Sigma-Aldrich in vitro toxicology assay kit) was added to each well and incubated for 4 h. The medium/stain was drawn off and the purple crystals were dissolved in acidic isopropyl alcohol. The absorbance in each well was measured at 570 nm using a 4294B Microplate Reader (Tecan Sunrise, American Instrument Exchange Inc., USA). All experiments were repeated 3 times independently to ensure reproducibility and data were acquired in triplicate (n = 3). Cell viability was calculated with the following formula:

Cell viability (%) = (A570 of treated cells / A570 of control cells) × 100

where control stands for the culture without any treatment.
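As a minimal illustration of this calculation (the absorbance readings below are invented triplicates, not the study's data):

```python
import numpy as np

def viability_percent(a570_treated, a570_control):
    """MTT viability as a percentage of the untreated control,
    averaging the replicate absorbance readings first."""
    return float(np.mean(a570_treated) / np.mean(a570_control) * 100.0)

# Hypothetical triplicate absorbances at 570 nm.
print(viability_percent([0.41, 0.39, 0.43], [0.82, 0.85, 0.80]))  # ~49.8
```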
Lactate dehydrogenase release assay
Cells at a density of 2.0 × 10⁵ cells/well were treated following the same protocol described above, at the indicated concentrations of the suspensions. After 48 h of incubation, the cultured cells were centrifuged at 430× g for 5 min and the cell medium was transferred to a new 96-well plate (50 μL/well). Upon addition of the lactate dehydrogenase (LDH) reaction mixture (Promega, CytoTox 96R non-radioactive assay kit) to the wells (50 μL/well), the plates were kept in the dark for 30 min, and then 1 N HCl (25 μL/well) was added to each sample to terminate the reaction. The resulting absorbance was measured at 490 nm. All experiments were repeated 3 times independently to ensure reproducibility and data were acquired in triplicate (n = 3). Control and positive control experiments were performed with medium only (0% release) and with 0.1% (w/v) Triton X-100 (cytotoxicity defined as 100%), respectively. LDH release (%) was calculated using the following equation:

LDH release (%) = (A490 of sample − A490 of control) / (A490 of positive control − A490 of control) × 100

Gel electrophoresis examination for DNA fragments

Hep G2 cells were seeded at a density of 2.0 × 10⁵ cells/mL in 10-cm dishes containing medium supplemented with 2% FBS and incubated for 4 h. Tumor necrosis factor-related apoptosis-inducing ligand (TRAIL), as a positive control, and ACE polysaccharide, ACE/CS and ACE/S suspensions were added to the dishes, protected from light and incubated for 48 h. Cells were harvested by centrifugation at 250× g after incubation, lysed in buffer solution and centrifuged at 14,000× g for 10 min. The fragmented DNA in the supernatant was extracted and analyzed by 2% agarose gel electrophoresis containing 0.1 μg/mL ethidium bromide. The stained DNA fragments were imaged with an Imagemaster VDS (Pharmacia, Uppsala, Sweden).
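Returning to the LDH formula above, the normalisation between the medium-only control (0%) and the Triton X-100 positive control (100%) reduces to the following sketch, again with invented A490 readings:

```python
import numpy as np

def ldh_release_percent(a_sample, a_control, a_triton):
    """LDH release normalised between the medium-only control (0%)
    and the Triton X-100 positive control (100%)."""
    s, c, t = (np.mean(x) for x in (a_sample, a_control, a_triton))
    return float((s - c) / (t - c) * 100.0)

# Hypothetical triplicate A490 readings.
print(ldh_release_percent([0.55, 0.57, 0.54],
                          [0.10, 0.11, 0.09],
                          [0.90, 0.92, 0.91]))  # ~56
```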
Cell cycle
Hep G2 cells were seeded at a density of 1.0 × 10⁶ cells per 60-mm dish and treated with ACE polysaccharides (25 μg/mL), ACE/CS (ACE polysaccharides = 13.2 μg/mL) and ACE/S (ACE polysaccharides = 21.2 μg/mL) for 24 h and 48 h, and then were harvested and fixed in ice-cold 70% (v/v) ethanol. The cells were further washed with phosphate-buffered saline (pH 7.2), incubated with 25 μg/mL RNase A at 37°C for 15 min and stained with 50 μg/mL propidium iodide (PI) for 30 min in the dark. In the flow cytometry assay, a FACSCalibur Flow Cytometer (Becton Dickinson, BD Biosciences, CA, USA) with an excitation wavelength of 488 nm and an emission wavelength of 630 nm was used, and data were acquired using Cell Quest software from a minimum of 10⁵ cells per sample.
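The sub-G1 quantification reported in the Results amounts to counting hypodiploid events on the PI histogram. The sketch below illustrates the idea on synthetic fluorescence values; the 0.85 × G1-peak gate is an assumed rule of thumb, not the gating actually used in Cell Quest.

```python
import numpy as np

def sub_g1_percent(pi_signal, g1_peak, gate=0.85):
    """Percentage of events whose PI fluorescence (DNA content) falls
    below gate * g1_peak, i.e. the hypodiploid (sub-G1) population."""
    pi_signal = np.asarray(pi_signal)
    return float(np.mean(pi_signal < gate * g1_peak) * 100.0)

# Synthetic sample: ~5% apoptotic events at roughly half the G1 DNA content.
rng = np.random.default_rng(0)
pi = np.concatenate([rng.normal(200, 10, 9500),   # diploid populations (simplified)
                     rng.normal(100, 15, 500)])   # hypodiploid events
print(sub_g1_percent(pi, g1_peak=200))            # ~5
```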
Mitochondrial membrane potential analysis
Again, the Hep G2 cells were treated with ACE polysaccharides (25 μg/mL), ACE/CS (ACE polysaccharides = 13.2 μg/mL) and ACE/S (ACE polysaccharides = 21.2 μg/mL) at 37°C for 48 h. After being resuspended at a density of 1.0 × 10⁶ cells/mL in PBS, the cells were stained with 25 μM Rhodamine 123 for 30 min at 37°C, and the membrane potential (ΔΨm) was detected by a FACSCalibur Flow Cytometer with excitation at 488 nm and emission at 520 nm, based on a minimum of 10⁵ cells per sample.
Measurement of intracellular reactive oxygen species
Formation of reactive oxygen species (ROS) in cells was evaluated according to the modified method adapted from Chou and Lin [27]. Briefly, Hep G2 cells were suspended in DMEM/10% FBS with or without ACE polysaccharides (25 μg/mL), ACE/CS (ACE polysaccharides = 13.2 μg/mL) and ACE/S (ACE polysaccharides = 21.2 μg/mL) at 37°C for 48 h. After trypsinization, the cells were washed, resuspended at a density of 1.0 × 10⁶ cells/mL in PBS and then kept in a dark chamber containing DCFDA (20 μM) for further flow cytometric analysis (488 nm excitation/520 nm emission).
Assay for Fas (apoptosis antigen 1, APO-1/cluster of differentiation 95, CD 95)
Hep G2 cells were treated with or without ACE polysaccharides, ACE/CS and ACE/S at the same concentrations described above at 37°C for 48 h, trypsinized, washed, resuspended at a density of 1.0 × 10⁶ cells/mL in PBS and incubated with phycoerythrin (PE)-conjugated Fas monoclonal antibody UB2 (1 pg/mL) at 4°C for 1 h. Samples were then evaluated by a FACSCalibur Flow Cytometer with excitation at 488 nm/emission at 580 nm, and data were acquired using Cell Quest software based on a minimum of 10⁵ cells per sample.
Activity of caspase-3, caspase-8 and caspase-9
Activities of caspase-3, caspase-8 and caspase-9 were determined with a CaspGLOW Fluorescein Active Caspase Staining Kit (BioVision, USA). In brief, cells were pre-incubated with ACE polysaccharides (25 μg/mL), ACE/CS (ACE polysaccharides = 13.2 μg/mL) and ACE/S (ACE polysaccharides = 21.2 μg/mL) for 48 h and counted to 1.0 × 10⁶ cells per 60-mm dish. After centrifugation at 400× g for 10 min, the cell pellets were lysed and kept on ice for 10 min. Supernatants were collected after centrifugation at 10,000× g at 4°C for 3 min, and DEVD-pNA (Asp-Glu-Val-Asp p-nitroaniline, for caspase-3), IETD-pNA (Ile-Glu-Thr-Asp p-nitroaniline, for caspase-8) and LEHD-pNA (Leu-Glu-His-Asp p-nitroaniline, for caspase-9) were added to a final concentration of 50 mM. Each sample was further incubated at 37°C for 1 h in a water bath, and fluorescence was detected by a FACSCalibur Flow Cytometer with excitation at 485 nm and emission at 520 nm, based on a minimum of 10⁵ cells per sample.
Statistical analysis
All data from the experiments are expressed as the mean ± SD and shown with error bars. A one-way ANOVA followed by Duncan's test was used for significance testing, with significance accepted at p < 0.05 (SPSS 11, SPSS Inc., Chicago, IL).
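As an illustration of this testing scheme (with invented triplicate values): SciPy provides the one-way ANOVA, but neither SciPy nor statsmodels ships Duncan's multiple range test, so Tukey's HSD is shown below as a stand-in post-hoc procedure.

```python
# One-way ANOVA followed by a pairwise post-hoc comparison; the group
# values are invented triplicates, not the study's measurements.
import numpy as np
from scipy.stats import f_oneway
from statsmodels.stats.multicomp import pairwise_tukeyhsd

control = [5.2, 5.1, 5.3]
ace     = [21.0, 20.5, 21.4]
ace_cs  = [20.6, 20.3, 20.8]

print(f_oneway(control, ace, ace_cs))              # overall group effect

values = np.concatenate([control, ace, ace_cs])
groups = ["control"] * 3 + ["ACE"] * 3 + ["ACE/CS"] * 3
print(pairwise_tukeyhsd(values, groups, alpha=0.05))
```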
ACE polysaccharides, ACE/CS and ACE/S induce Hep G2 death and LDH release
ACE polysaccharides, ACE/CS and ACE/S induced the death of Hep G2 cells in a dose- and time-dependent manner (Fig 1). The results showed a slight decrease in cell viability at 24 h that became very significant after 48 h of incubation, especially in the ACE/CS group (Fig 1b). Meanwhile, the viability of Hep G2 cells only decreased to approximately 85% in treatments with chitosan-silica nanoparticles (CSNP) or silica nanoparticles (SNP) carrying no ACE polysaccharides, even at the highest concentration of 667 μg/mL for 48 h of incubation (Fig 1c). Regarding morphology, cell shrinkage was clearly observed in phase-contrast micrographs after exposure to ACE/CS (Fig 2a-2d) and the positive control (Fig 2e), but was insignificant in the CSNP treatment (Fig 2g). The effects of ACE polysaccharides and the synthesized nanoparticles on Hep G2 cell membrane integrity, determined by the LDH assay, are shown in Fig 3a-3c. ACE polysaccharides, ACE/CS and ACE/S caused cell membrane damage in a dose- and time-dependent manner, especially at the highest concentration of 166.67 μg/mL of the ACE polysaccharide treatment for 48 h (> 60% LDH released). At the corresponding concentration, e.g., 166.67 μg/mL, ACE/CS and ACE/S (containing 13.2 μg/mL and 21.2 μg/mL ACE polysaccharides, respectively) showed much less membrane damage (< 20% LDH released; Fig 3b and 3c). The effects of nanoparticles without ACE polysaccharides on Hep G2 cell membrane integrity are shown in Fig 3d. None of the chitosan-silica composite treatments produced a significant membranolytic effect up to 48 h, even at a concentration of 667 μg/mL (< 3.7% LDH released). In contrast, silica nanoparticles were found to cause slight (< 6.5% LDH released) membrane damage under the same conditions.
ACE polysaccharides, ACE/CS and ACE/S caused DNA fragmentation and apoptosis
The characteristic DNA laddering of apoptosis was examined in Hep G2 cells treated with ACE polysaccharides, ACE/CS and ACE/S at concentrations of 25 μg/mL, 13.2 μg/mL and 21.2 μg/mL, respectively, using agarose gel electrophoresis. The results showed a ladder-like pattern of multiple DNA fragments of approximately 180-200 base pairs (Fig 4). For cell cycle analysis, flow cytometry was used to quantify the extent of DNA fragmentation, and apoptotic cells were identified as the hypodiploid peak (sub-G1) on the PI histogram. The percentage of apoptotic cells was 1.42% in the control group and reached 1.46% after treatment with ACE polysaccharides (25 μg/mL) for 24 h (Table 1 and S1 Fig). When cells were incubated with ACE/S (containing 21.2 μg/mL ACE polysaccharides) for 24 h, the sub-G1 phase population slightly increased to 2.76%, whereas it increased to 2.88% for cells incubated with ACE/CS (containing 13.2 μg/mL ACE polysaccharides; Table 1 and S1 Fig). After 48 h of incubation, the sub-G1 phase population approached 5.18% in the control group and reached 20.98% and 20.56% in the ACE polysaccharide and ACE/CS groups, respectively, but only increased to 10.86% for cells incubated with ACE/S (Table 1 and S1 Fig). In contrast, after Hep G2 cells were incubated with silica and chitosan-silica nanoparticles without ACE polysaccharides, even at the highest concentration (667 μg/mL) for 48 h, the sub-G1 phase population only increased slightly, to 6.54% and 9.16%, respectively (Table 1).
ACE polysaccharides, ACE/CS and ACE/S decreased the ΔΨm
To investigate the mitochondrial apoptotic events induced in Hep G2 cells by ACE polysaccharides, ACE/CS and ACE/S, we analyzed the mitochondrial membrane potential (ΔΨm); the results are shown in Table 2.
ACE polysaccharides, ACE/CS and ACE/S increased the ROS generation
The DCF fluorescent response, which indicates the level of ROS, increased significantly in Hep G2 cells after incubation with ACE polysaccharides, ACE/CS or ACE/S, as shown in Table 2 and S3 Fig. After 48 h of incubation, ROS formation increased to 18.76% relative to controls in the ACE polysaccharide treatment, and to 7.06% and 13.36% in the ACE/CS and ACE/S treatments, respectively. The effects of SNP or CSNP on ROS generation are also shown in Table 2; neither caused significant ROS production (< 3.52%) in Hep G2 cells, even at a concentration of 667 μg/mL for 48 h of incubation.

Table 1 note: Hep G2 cells were seeded at a density of 1.0 × 10⁶ cells per 60-mm dish and treated with ACE polysaccharides (25 μg/mL), ACE/CS (ACE polysaccharides = 13.2 μg/mL) or ACE/S (ACE polysaccharides = 21.2 μg/mL). Nanoparticles without ACE polysaccharides (CSNP and SNP) were also examined, and untreated cells were defined as controls. Flow cytometry (excitation 488 nm/emission 630 nm) was used for the analysis of PI-stained DNA, and data acquired with Cell Quest software from a minimum of 10⁵ cells per sample were analyzed using a one-way ANOVA followed by Duncan's test. The results are shown as the mean ± standard deviation (S.D.; n = 3); values within the same row without a common superscript (a, b, c) differ significantly (p < 0.05).
ACE polysaccharides, ACE/CS and ACE/S increased the expressions of Fas/APO-1 and caspase activities
The Fas/FasL system is the most important apoptosis initiator in cells and tissues. In this study, the expression of Fas/APO-1 was detected in Hep G2 cells after treatment with ACE polysaccharides, ACE/CS and ACE/S. The Fas/APO-1 content increased by up to 47.6% relative to controls after incubation with ACE polysaccharides, ACE/CS or ACE/S.
Discussion
In our previous study, the major component of ACE was polysaccharides with a (1→3)-β-D-glucan structure, while the average particle sizes of ACE/CS and ACE/S were 210 ± 13.3 nm and 294 ± 25.7 nm and their encapsulation efficiencies 85.7% and 76.4%, respectively [28]. Many functions of polysaccharides extracted from A. camphorata have been identified [3,10,11]. In this study, we showed that ACE polysaccharides, ACE/CS and ACE/S all exhibited notable cytotoxic, i.e., antitumor, activity against Hep G2 cells, especially the ACE/CS treatment. We therefore further examined cell morphology by phase-contrast microscopy in the ACE/CS group, and cell shrinkage was clearly present. Apparently, the nano-encapsulation of ACE polysaccharides provides a viable approach for enhancing antitumor efficacy in liver cancer cells. Moreover, previous studies showed that polysaccharides extracted from A. camphorata had no cytotoxic effect on human leukemic U937 cells, human endothelial cells or normal hepatocytes [11]. Our earlier study also reported that ACE polysaccharides, ACE/CS and ACE/S were nontoxic to three human normal cell lines, i.e., human skin fibroblast CCD-966sk cells, human normal fibroblast WS1 cells and human lung embryonic MRC-5 cells [28]. These studies suggest that ACE polysaccharides, ACE/CS and ACE/S are toxic only to cancer cells, such as Hep G2 cells, and not to normal cells. In contrast, the cytotoxicity of chitosan-silica or silica nanoparticles not loaded with ACE polysaccharides on Hep G2 cells was insignificant, consistent with our previous study, which demonstrated that silica nanoparticles become more toxic in normal human fibroblast cells than in cancer cells at high concentrations [25]. LDH leakage from cells is explicit evidence of cell membrane damage and is one of the physiological features of necrotic cell death [29]. In the present study, we found that LDH leakage increased as the concentrations of the ACE polysaccharide, ACE/CS and ACE/S treatments increased in Hep G2 cells. These results were in agreement with the data obtained from the MTT assay (Fig 1). Meanwhile, 166.67 μg/mL of ACE/CS and ACE/S (containing 13.2 μg/mL and 21.2 μg/mL ACE polysaccharides, respectively) induced less membrane damage (< 20% LDH released) than would be expected from the cytotoxicity (< 40% cell viability) observed at the corresponding concentrations. This slight difference between cell viability and membrane damage could result from apoptosis and necrosis occurring concurrently when induced by nanoparticles [30,31]. However, at the highest concentration (166.67 μg/mL), ACE polysaccharides caused a massive (> 60% LDH released) membranolytic effect on Hep G2 cells within 48 h; hence, we deduced that high concentrations of ACE polysaccharides may induce cell necrosis. Therefore, the concentrations of ACE polysaccharides in the following tests were kept below this cytotoxic level in order to evaluate the apoptosis mechanisms in Hep G2 cells.
Fragmentation of DNA is one of the most important and irreversible events in apoptosis. To further examine cell apoptosis unaffected by necrosis, DNA fragmentation and a cell cycle assay were performed in this study. The results revealed that ACE polysaccharide, ACE/CS and ACE/S treatments could induce DNA degradation, resulting in the appearance of a ladder-like pattern. The cell cycle study also showed that ACE polysaccharides, ACE/CS and ACE/S increased the sub-G1 phase population of Hep G2 cells in a time-dependent manner. Research has shown that A. camphorata can cause the accumulation of a sub-G1 population with hypodiploid DNA content, which is a typical marker of apoptosis and triggers cell death [8]. The assays described above demonstrated that ACE polysaccharides and the resulting nanoparticles were capable of inducing apoptosis and triggering cell death.
Previous studies have shown that A. camphorata can induce apoptosis in cancer cells by triggering the mitochondrial pathway [5]. In this study, the ΔΨm of Hep G2 cells decreased dramatically after 48 h of incubation with ACE polysaccharides, ACE/CS or ACE/S, which indicated that ACE polysaccharides and the resulting nanoparticles could induce apoptosis of Hep G2 cells through damage to the mitochondrial membrane. Meanwhile, markers of oxidative stress, such as ROS and lipid peroxidation, have been observed in some apoptotic processes [32-34]. ROS play an important role in apoptosis by regulating the activity of certain enzymes involved in the cell death pathway. In our study, incubation of Hep G2 cells with ACE polysaccharides, ACE/CS or ACE/S induced ROS generation and caused subsequent apoptosis. However, a previous study revealed that A. camphorata might possess antioxidant properties against ROS generation [35]. This apparent conflict suggests that the active components in A. camphorata might serve as mediators of the reactive oxygen scavenging system and have the potential to act as both pro-oxidants and antioxidants, depending on the redox state of the biological environment.
The Fas/FasL system is a key signaling transduction pathway of apoptosis in cells and tissues [36,37]. Many publications have highlighted the role of the Fas/FasL system in chemotherapy-induced apoptosis of tumors through up-regulation of Fas/APO-1 or FasL [38]. In this study, we found that Fas/APO-1 expression in Hep G2 cells dramatically increased after incubation with ACE polysaccharides, ACE/CS or ACE/S, meaning these materials have the ability to induce Fas-mediated apoptosis. As for caspases, the proteolytic activities of caspase-3, caspase-8 and caspase-9 in Hep G2 cells increased from 19% to nearly 42% after incubation with ACE polysaccharides, ACE/CS or ACE/S for 48 h. Activated caspase-8 and caspase-9 further initiated the caspase cascade, leading to the biochemical and morphological changes associated with apoptosis [39-41]. Caspase-3 is a well-known downstream effector caspase that can be activated by caspase-8 or caspase-9 via different signaling pathways [42]. The results of our study demonstrated that ACE polysaccharides, ACE/CS and ACE/S activated the caspase cascade, resulting in Hep G2 cell death.
The apoptosis assays described above revealed that ACE polysaccharides, ACE/CS and ACE/S induced similar apoptosis mechanisms in Hep G2 cells, but a smaller dose of ACE/CS (ACE polysaccharides = 13.2 μg/mL) was needed to reach the same effect as ACE/S (ACE polysaccharides = 21.2 μg/mL) or free ACE polysaccharides (25 μg/mL). This result suggests that nano-encapsulation increased the apoptotic effect of ACE polysaccharides in Hep G2 cells, and that ACE/CS could have more potential than ACE/S. In conclusion, ACE polysaccharides, ACE/CS and ACE/S appear to be promising antitumor agents for liver cancer cells. The encapsulation of ACE polysaccharides in chitosan-silica nanoparticles might provide a useful carrier for antitumor ingredients. | 5,467 | 2015-09-01T00:00:00.000 | [
"Materials Science",
"Medicine"
] |
Adaptive Node Clustering Technique for Smart Ocean under Water Sensor Network (SOSNET)
Smart ocean is a term broadly used for activities such as monitoring the ocean surface, sea habitat monitoring, and mineral exploration. Development of an efficient routing protocol for smart oceans is a non-trivial task because of various challenges, such as the presence of tidal waves, multiple sources of noise, high propagation delay, and low bandwidth. In this paper, we propose a routing protocol named the adaptive node clustering technique for smart ocean underwater sensor networks (SOSNET). SOSNET employs a moth flame optimizer (MFO) based technique for selecting a near-optimal number of clusters required for routing. MFO is a bio-inspired optimization technique that takes into account the movement of moths towards light. The SOSNET algorithm is compared with other bio-inspired algorithms, such as comprehensive learning particle swarm optimization (CLPSO), ant colony optimization (ACO), and gray wolf optimization (GWO), all of which are used for routing optimization. The performance metrics used for this comparison are the transmission range of nodes, node density, and grid size. These parameters are varied during the simulation, and the results indicate that SOSNET performs better than the other algorithms.
Introduction
Oceans cover about 71% of the earth's surface and contain large amounts of minerals and oil as well as vast populations of animals and other creatures. Ocean studies have attracted attention for a long time, but received renewed focus in the last decade of the 20th century, when the issues of global warming and climate change surfaced. Smart ocean is a broad term describing the use of sensors combined with geographic information systems for monitoring the ocean surface [1]. One of the initial efforts in this area is the Monterey Bay Aquarium Research Institute (MBARI) Ocean Observing System (MOOS) [2]. This observation system consists of sensor nodes distributed over a wide area throughout the ocean bed. MOOS observatories provide real-time data to scientists, enabling them to effectively monitor ocean-related phenomena. In addition to MOOS, Martha's Vineyard Coastal Laboratory and the New Millennium Observatory [3,4] provide similar services to scientists. These observatories provide huge amounts of data, but unfortunately the data are not interoperable. To overcome this problem, the Smart Ocean Sensors Consortium [1] has developed the PUCK protocol, which makes it easy to exchange information among ocean sensors. Since all the data are acquired from sensor nodes deployed under the sea, studies of smart ocean architecture tend to relate to research on underwater sensor networks, briefly introduced in the following.
Underwater sensor networks (UWSN) have been a focus of research for many years. They have gained significance in oceanic research and provide data for many applications, such as ocean surface monitoring, disaster prevention, habitat monitoring, and surveillance [5]. In comparison to their terrestrial counterparts, the sensors in UWSN are equipped with an acoustic modem designed to work in an underwater environment, an antenna for transmission, a pressure gauge, and a bladder apparatus for controlling depth. These sensors form a sensor-equipped aquatic swarm (SEAS), which coordinates with sink nodes at the sea surface. Sensor nodes monitor underwater data and send critical data to the sink nodes in a multi-hop fashion. The sink nodes are armed with acoustic and radio communication abilities to receive data from underwater sensors and relay them to the outer world. Like any other sensor network, routing is a critical problem for UWSN [6].
Earlier attempts to develop routing protocols for UWSN were inspired by their terrestrial counterparts. Although similar in nature, UWSN do have some significant differences from wireless sensor networks (WSN), which make it difficult for routing protocols developed for WSN to be used as-is in UWSN. One of the main reasons is that WSN routing protocols are designed for radio frequency (RF) waves, which attenuate significantly under water; hence, RF-based sensors are ineffective in UWSN. Sensors in UWSN are instead equipped with acoustic modems tailored to work in underwater environments [7].
Early routing protocols proposed for UWSN followed the classical layered protocol architecture, i.e., protocols were developed for individual layers such as the transport and network layers. The performance of these protocols therefore depended on parameters related to a specific layer, and overall they performed poorly. The basic reason for the poor performance of these so-called non-cross-layer protocols is that they did not consider energy levels, joint optimization, and other factors important for overall performance. Hence, the focus of recent research is cross-layer routing protocols, which use information received from the various layers [8].
In addition to the above-mentioned problems, there are other significant issues to consider when designing an efficient routing protocol for UWSN. One such issue is high propagation delay: as discussed in [9], the propagation delay of acoustic waves is 200,000 times higher than that of RF waves, which poses a great challenge. At the same time, due to water currents and extreme weather conditions, noise in UWSN is very high; noise is introduced by tides, water movement, and variations in sea currents. Another source of noise is machinery such as ships, water pumps, and power plants. Thus, removal of noise is a big challenge in designing a routing protocol. Geometric spreading, i.e., the spreading of sound energy due to the expansion of wave fronts, is another problem in UWSN. Water currents also change the topology of the sensors, which has a great effect on the performance of a routing protocol. Furthermore, the bandwidth of acoustic waves is very low, ranging between 1 and 50 kHz [10], and the transmission power required in UWSN [11] is significantly higher than in terrestrial networks. Doppler spread creates significant problems in high-speed networks, as it introduces inter-symbol interference [12]. Finally, sensors in UWSN are subject to corrosion and fouling, which results in failure [13].
The focus of this paper is the design of an efficient routing protocol for UWSN. The crux of the proposed routing protocol, SOSNET, is dividing the UWSN into different clusters and developing an effective dynamic clustering scheme for selecting a cluster head responsible for coordination with the sink nodes. The selection of the cluster head depends on the available energy in a sensor node. Once the energy of a cluster head falls below a given threshold, a new cluster head is selected through an election mechanism. The contributions of this paper are briefly as follows:
• Optimization of UWSN routing by employing evolutionary algorithms. These algorithms have already been used successfully in other similar network types, such as mobile ad hoc networks (MANETs), vehicular ad hoc networks (VANETs), and flying ad hoc networks (FANETs), to name a few. Our proposed technique, SOSNET, is a moth flame optimization-based technique that reduces the overall routing cost and makes the UWSN more efficient.
• SOSNET's performance has been compared with other state-of-the-art evolutionary algorithms, and the results show that the proposed SOSNET provides better results.
The remainder of the paper is organized as follows. Section 2 discusses the related work in this area, Section 3 introduces the key concepts of moth flame optimization, Section 4 provides the detail regarding the proposed algorithm, and Section 5 provides details of the experimental setup. Section 6 provides simulation results and Section 7 concludes the discussion.
Related Work
As with any sensor network, energy preservation is a major goal for UWSN. Many energy-efficient routing protocols have been proposed using depth information. Depth based routing (DBR) [14] is one such scheme: it measures the depth of each sensor and, based on this information, data from the sensor with the lower depth are sent towards the sink nodes. Information on the remaining energy, along with depth information, is used in energy-efficient depth based routing [15]. To mitigate the flooding issue in DBR, [16] introduces depth based multi-hop routing, which uses multi-hop transmission to reduce the number of packets sent to the sink node. The relative distance-based protocol (RDBF) [17] uses a fitness function to decide which node will forward the packet towards the sink. The energy-efficient routing protocol (EUROP) [18] introduces a layering concept, dividing the sensor nodes into different layers based on depth information. Delay sensitive depth based routing (DSDBR) [19] is another depth based protocol that uses holding time as a measure to reduce transmission delay.
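To make the depth-based idea concrete, the sketch below shows the forwarding decision at the heart of DBR [14] in simplified form: a node rebroadcasts a packet only if it is sufficiently shallower than the previous forwarder, so packets drift towards the surface sinks. The 5 m threshold and the minimal packet model are illustrative assumptions, not the protocol's full specification.

```python
# Minimal sketch of depth-based forwarding (DBR-style), under stated assumptions.
from dataclasses import dataclass

@dataclass
class Packet:
    payload: bytes
    sender_depth: float  # depth (m) of the last forwarder, carried in the header

def should_forward(own_depth: float, pkt: Packet, depth_threshold: float = 5.0) -> bool:
    # Forward only when this node is shallower than the previous hop by at
    # least depth_threshold metres; otherwise suppress the rebroadcast.
    return (pkt.sender_depth - own_depth) >= depth_threshold

pkt = Packet(payload=b"sensor reading", sender_depth=120.0)
print(should_forward(own_depth=100.0, pkt=pkt))  # True: 20 m closer to the surface
print(should_forward(own_depth=118.0, pkt=pkt))  # False: improvement below threshold
```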
Clustering is a well-known technique for conserving energy in sensor networks. Such a scheme is proposed in [20], where two- and three-dimensional architectures divide the network into two or three clusters, with the cluster head responsible for communication with the sink node. DUCS (distributed underwater clustering scheme) [21] is a distributed clustering scheme that divides nodes into different clusters; most nodes are limited to communication within the cluster, whereas the cluster head handles inter-cluster communication. MCCP (minimum cost clustering protocol) [22] is yet another protocol, which selects cluster heads based on the available residual energy, the relative position of the node with respect to the sink, and the energy required by the member nodes to communicate with the candidate node. LCAD (location based clustering algorithm for data gathering) [23] is another such technique, developed for three-dimensional UWSN; nodes are deployed in a multi-tier fashion and multiple cluster heads are chosen for communication. A review of these techniques is presented in [21-23].
AI-based routing algorithms have gained popularity in many fields of networking, and correspondingly many AI-based routing algorithms have been proposed for UWSN. One such algorithm is LEACH-C [24], an enhancement of the low-energy adaptive clustering hierarchy (LEACH) algorithm. The focus of LEACH-C is the selection of the cluster head (CH), based on a simulated annealing mechanism that provides an optimal selection of the CH with the maximum energy level. Similarly, in [25] the same mechanism is used, in two rounds, to select a secondary cluster head, which becomes the cluster head when the energy level of the current cluster head falls below a certain threshold. LEACH-SAGA [26] is another enhancement of the LEACH algorithm, based on simulated annealing and a genetic algorithm: the clusters are formed by the genetic algorithm and simulated annealing, and the cluster head is then chosen based on distance from the cluster center and residual energy. Application-specific low power routing (ASLPR) [27] is yet another routing algorithm that employs simulated annealing to enhance the lifetime of a cluster head. ASLPR simply selects a cluster head with a high energy value for a specific time period; after the time expires, a new cluster head is selected, again based on the available energy value, to prolong the life of the nodes.
Particle swarm optimization (PSO) based techniques are popular for optimization in a variety of computing problems, and PSO-based techniques are likewise available for WSN routing protocols. One such technique is discussed in [28]: a two-stage approach in which the cluster head is first selected on the basis of its available energy and network coverage, and a routing tree connecting the cluster heads with the sink node is then created. Another PSO-based protocol is presented in [29]. In this approach, an energy-conservation-based routing protocol is proposed, which selects the gateway and relay nodes that have the minimum distance from the sink nodes; a fitness function is also proposed for load balancing in the network. Particle swarm optimization reactive (PSOR) [30] is yet another PSO-based protocol, in which the optimum path is computed on the basis of the energy required to reach the sink node. PSO-based energy-efficient cluster head selection (PSO-ECHS) [31] is a scheme focusing on the efficient selection of a cluster head. For this selection, PSO-ECHS considers residual energy, the distance among the nodes in a cluster, and the distance from a sink node. Clusters are likewise formed using a weight function with the same parameters.
Ant colony optimization (ACO) is closely related to PSO and is another popular optimization method; ACO-based routing algorithms are used in various networks, including WSN. ACOPSO [32] is a hybrid routing protocol that employs both ACO and PSO: ACO is used for path selection by constructing a shortest-path spanning tree, followed by PSO-based path selection whose inputs are the outputs produced by the ACO stage. LATWSN [33] is a routing protocol based on the ACO technique; it tries to minimize energy consumption by selecting the node in the neighbor list that is nearest to the destination. LEACH-P [34] is an ACO-based extension of the LEACH protocol, in which the probability of energy utilization is calculated while selecting the next-hop neighbor towards the sink node. IACAEO [35] is another ACO-based protocol, which works by decreasing the pheromone value of a node if its energy level falls below a certain threshold; the same information is communicated to its neighbor nodes so that the node is not selected in future routing decisions. Like ACO and PSO, another popular machine learning technique, neural networks (NN), is also used in WSN for the development of efficient routing algorithms. One such technique is discussed in [36], where a Hopfield NN is used to enhance the working of a link-state routing protocol; the weighted matrix used for initializing the neurons and selecting the next routing node takes into account the number of hops, load, delay, and bandwidth. SIR [36] is another NN-based routing protocol, which modifies the classic Dijkstra algorithm to find optimal routes from a sink to every node; the same algorithm also provides QoS and has been used for recording the readings of utility meters. Reference [37] describes another technique in which a three-stage NN is used to select the cluster head, followed by a traditional routing mechanism in which the distance between the sink node and the NN-selected cluster head is used to select an optimal route. These routing protocols are categorized in Figure 1.
Moth Flame Optimizer
Moths are insects that resemble butterflies. To date, about 160,000 species of moths have been identified. Moths are born as larvae, which develop into adult moths in cocoons. The fascinating aspect of moths is the way they navigate at night: they travel using moon light, in a method known as transverse orientation, in which moths fly while maintaining a fixed angle towards the moon. This is an effective method for traveling in a straight line, and similar methods can also be used by humans [38-40]. Suppose a human wants to go east and the moon is in the southern part of the sky; the person can travel in a straight line by keeping the moon on his or her left side while moving. Despite the efficiency of transverse orientation, moths are deceived by artificial light and tend to fly spirally towards it. The reason for this behavior is that transverse orientation works well only if the light source is far away. When an artificial light is encountered, moths still try to maintain a fixed angle towards it; however, since the artificial light is too close, this results in a deadly spiral path. At the same time, it can be said that moths always converge towards light. This very useful property is exploited mathematically in the moth flame optimizer (MFO) algorithm introduced in [41]. Future research directions are discussed in [42,43]. Similarly, future research avenues regarding mobile anchor node assisted localization for WSN are presented in [44], and localization algorithms for UWSNs are summarized in [45,46].
In the MFO algorithm, moths are treated as candidate solutions, where the location of a moth in space represents the problem's variables. The moths can fly in one-, two-, or any multi-dimensional space while changing their positions. MFO is a population-based algorithm, and an m × n moth matrix can be generated from the set of moth positions, where m is the number of moths and n is the number of variables. A moth array stores the corresponding fitness value of each moth, i.e., the value returned by the fitness function for that moth. Similarly, two structures are created for flames: a flame matrix of the same form as the moth matrix, and a flame array of the same form as the moth array. It is important to note that both moths and flames are solutions; the difference lies in how each is updated in every iteration. Moths are agents moving in the search space, whereas the flames record the best positions the moths have attained so far. One can think of the flames as pins dropped by the moths while moving through the search space: in each search operation a moth flies around a flame and updates it if a better position is found. By adopting this mechanism, the moths converge towards the best solution [39-41]. Figure 2 explains the SOSNET algorithm. During the initialization phase, each moth has a random position in the solution space, and fitness values are stored in the moth array. Similarly, a flame matrix and its corresponding array are generated; the flame matrix stores the best values found so far. The moths are moved in the solution space until an optimal solution is found, after which the search operation is terminated. This is followed by updating the moth positions. The same operation is repeated until the optimal number of flames and, correspondingly, the optimal position of each moth with respect to its flame is obtained. A minimal sketch of this procedure is given below.
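The following is a minimal, self-contained MFO sketch following the scheme just described and [41]. The spiral constant b, the population size, the iteration count, and the sphere test function are illustrative choices, not values taken from this paper.

```python
# Hedged MFO sketch: logarithmic-spiral moth updates around a shrinking flame set.
import numpy as np

def mfo(fitness, dim, lb, ub, n_moths=30, max_iter=100, b=1.0, seed=0):
    rng = np.random.default_rng(seed)
    moths = rng.uniform(lb, ub, size=(n_moths, dim))            # moth matrix (m x n)
    fit = np.apply_along_axis(fitness, 1, moths)                # moth fitness array
    order = np.argsort(fit)
    flames, flame_fit = moths[order].copy(), fit[order].copy()  # flame matrix/array
    for it in range(1, max_iter + 1):
        # number of flames decreases linearly so moths converge on the best ones
        n_flames = round(n_moths - it * (n_moths - 1) / max_iter)
        r = -1.0 - it / max_iter                                # path constant: -1 -> -2
        for i in range(n_moths):
            f = flames[min(i, n_flames - 1)]                    # flame assigned to moth i
            t = (r - 1.0) * rng.random(dim) + 1.0               # t drawn from [r, 1]
            d = np.abs(f - moths[i])                            # moth-to-flame distance
            moths[i] = np.clip(d * np.exp(b * t) * np.cos(2 * np.pi * t) + f, lb, ub)
        fit = np.apply_along_axis(fitness, 1, moths)
        pool = np.vstack([flames, moths])                       # keep best positions so far
        pool_fit = np.concatenate([flame_fit, fit])
        keep = np.argsort(pool_fit)[:n_moths]
        flames, flame_fit = pool[keep], pool_fit[keep]
    return flames[0], flame_fit[0]

best_pos, best_val = mfo(lambda x: float(np.sum(x**2)), dim=5, lb=-10.0, ub=10.0)
print(best_pos, best_val)
```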
SOSNET-Proposed Methodology
CAMONET [38] is an MFO-based clustering algorithm applied in vehicular ad hoc networks (VANET). We believe that the same solution can also be extended to UWSN. CAMONET finds the optimal number of clusters required in a given network. Due to its evolutionary capabilities, it finds good solutions for both continuous and discrete variable problems. Another promising aspect of this solution is that it is computationally inexpensive, which makes it a suitable candidate for UWSN. In [38] we implemented CAMONET in a VANET environment.
In the current scenario, SOSNET, we create a grid of n × n sensors in a geographical area. All the sensors are given a suitable ID in a mesh-type network, and these nodes are treated as moths. A matrix of Euclidean distances between all nodes is generated. These moths form the basis of our search space, whose formation uses certain parameters, including the dimensions and the lower and upper bounds. This is followed by checking the fitness of the moths using their locations in the search space. This is an iterative process that results in the creation of a fitness matrix; after each iteration, the generated values are stored in this matrix and arranged in ascending order, so the matrix provides the lowest fitness values of the moths. By combining moth positions and fitness values, we obtain the best score of the flame, which is used for updating the moth positions. A linearly decreasing factor 'x' is used to converge this to an optimal solution. Through this convergence we obtain the optimal number of clusters required for effective communication under the given parameters. The pseudo code of SOSNET is presented in Algorithm 1. A sketch of the setup is given below.
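The sketch below illustrates the setup just described: an n × n grid of nodes, the pairwise Euclidean distance matrix, and the assignment of every node to its nearest cluster head. The 50 m node spacing and the CH node IDs are placeholders; in SOSNET itself, the CHs would come out of the MFO search sketched earlier.

```python
# Hedged sketch of the SOSNET grid, distance matrix, and cluster membership.
import numpy as np

n = 10                                                          # 10 x 10 = 100 nodes
xs, ys = np.meshgrid(np.arange(n) * 50.0, np.arange(n) * 50.0)  # 50 m spacing (assumed)
nodes = np.column_stack([xs.ravel(), ys.ravel()])               # node ID = row index

# pairwise Euclidean distance matrix between all nodes
dist = np.linalg.norm(nodes[:, None, :] - nodes[None, :, :], axis=-1)

# suppose the MFO search returned these cluster-head node IDs (illustrative values)
ch_ids = np.array([12, 47, 83])
membership = np.argmin(dist[:, ch_ids], axis=1)                 # each node joins its nearest CH
for k, ch in enumerate(ch_ids):
    print(f"CH {ch}: {np.sum(membership == k)} member nodes")
```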
Once the clusters are formed, the next phase is to select the cluster head (CH). Selection of the CH depends on several parameters, such as transmission range, residual energy of the node, node density, grid size, and load balance factor; weights are assigned to all of these parameters in the fitness function. This selection is carried out using a fitness function, which is an important part of our algorithm: selecting the best CH increases cluster lifetime and ultimately saves network energy. SOSNET calculates the fitness value using Equation (1), in which Energy_Resi is the residual energy of the node, avg_dis is the average distance towards neighbor nodes, and delta_diff is the delta difference, used as the load balancing factor (LBF); W1 is the weight assigned to energy, W2 the weight for average distance, and W3 the weight assigned to the delta difference. In an ideal scenario, all clusters would have an equal number of members; however, this is difficult to achieve in a real-world scenario due to changes in sensor positions caused by ocean currents and other factors. The delta difference measures the deviation from the ideal degree caused by a node's movement relative to its neighbors and is calculated by Equation (2).
Recent research has shown that if the criterion for selecting CHs is static, there is a high chance that a single parameter may bias the fitness function and thus lead to selection of an inappropriate CH [21-23,38]. To counter this problem, SOSNET dynamically assigns weights to its parameters, based on their negative impact on the fitness function in a given scenario. It first normalizes the value of each parameter to the range 0 to 10. Then the deviation of each parameter from its mean, which reflects the parameter's negative impact, is calculated using Equation (3):

Dev(p) = ABS(mean − parameter(p)).    (3)

In addition, Equation (4) is used to penalize outlier parameters, adding a penalty that depends on the parameter's deviation from its mean value. The sum of all the weights must equal 1. The fitness value for each node is then calculated from these weights and the parameter values through Equation (1).
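Since the bodies of Equations (1) and (4) are not reproduced in this text, the sketch below fills them in with labeled assumptions: the fitness is assumed to be a weighted sum of the parameters, and the penalty of Equation (4) is assumed proportional to each parameter's deviation from the mean. Only the 0-10 normalization, Dev(p), and the weights-sum-to-1 constraint come directly from the description above.

```python
# Hedged sketch of SOSNET's dynamic weight assignment; Eq. (1) and Eq. (4)
# are assumed forms, not reproduced from the paper.
import numpy as np

def dynamic_weights(params):
    """params: raw values of the CH-selection parameters for one candidate node."""
    p = np.asarray(params, dtype=float)
    p = 10.0 * (p - p.min()) / (p.max() - p.min() + 1e-12)  # normalize to 0..10
    dev = np.abs(p.mean() - p)                              # Dev(p), Equation (3)
    w = (dev + 1e-12) / (dev + 1e-12).sum()                 # assumed penalty; sums to 1
    return w

params = [7.2, 3.1, 5.5]            # e.g. residual energy, avg distance, delta diff
w = dynamic_weights(params)
fitness = float(np.dot(w, params))  # assumed weighted-sum form of Equation (1)
print(w, fitness)
```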
Experimental Setup
The experiments were carried out using MATLAB version 2018a (The MathWorks, Inc., 3 Apple Hill Drive, Natick, MA 01760, USA) and run on a 7th-generation Core i5 system with 8 GB of RAM. Various experiments were carried out with grid sizes ranging from 500 to 2000 m. Similarly, the number of nodes used in the experiments ranged from 20 to 200, and the transmission range of each node was varied from 25 to 200 m. It was assumed that nodes remained in fixed positions or moved very slowly in the presence of water currents.
SOSNET was compared with other state-of-the-art evolutionary clustering protocols: ACO, gray wolf optimization (GWO), and comprehensive learning particle swarm optimization (CLPSO). Table 1 provides the parameters used for the simulation.
Results and Discussion
To measure the performance of each algorithm, fifty-six simulations were performed; their results are depicted in Figures 3-7. Two basic parameters, the transmission range of the nodes and the node density, were evaluated at different values to check the effectiveness of SOSNET. The results show the flexibility and superiority of SOSNET.
In Figure 3a-e, the nodes' transmission range was set between 25 m and 200 m, the grid size was set at 500 m × 500 m, and the number of nodes in the area ranged from 20 to 200. Figure 3a shows a node density of 40 nodes with the transmission range set to 25 m: SOSNET generated only 18 clusters, whereas ACO created 28 clusters, CLPSO 34, and GWO 33. When the transmission range was increased to 200 m, SOSNET generated only 2 clusters, in comparison to 3, 9, and 7 for ACO, CLPSO, and GWO, respectively. This is consistent with well-established studies showing that fewer clusters form as the transmission range of the nodes increases. Across the whole 25-200 m transmission range, SOSNET outperformed all the other algorithms, as is evident from the figure. Similar results can be seen when the number of nodes was increased to 40, 80, 120, 160, and 200, as shown in Figure 3b-e. From these results it is clear that SOSNET outperformed all the other algorithms, creating the minimum number of clusters required for the given grid size.
Grid Size vs. Transmission Range vs. Number of Clusters
To make matters more concrete, another set of experiments was carried out to check the performance of the proposed algorithm. In these experiments, we varied the transmission range of the nodes from 50 m to 200 m while keeping the grid size static at 500 m × 500 m and the node density at 200 nodes. The results of these experiments are shown in Figure 4. It is evident that even with the varied transmission range SOSNET performed equally well and in all instances gave the best results, with ACO the closest competitor in these settings.
Even with the grid size increased to 1 km × 1 km, SOSNET outperformed all the other algorithms. The results of this scenario are depicted in Figure 5. In this setting, when the node density was set to 80 nodes, SOSNET created 9 clusters, whereas ACO created 13, CLPSO 19, and GWO 18. Clearly, SOSNET produced the smallest number of clusters. When the node density was increased to 200, ACO created 9 clusters, GWO 19, and CLPSO 24, whereas SOSNET created only 6; this shows that at this grid size, for any node density, SOSNET created the fewest clusters, thus optimizing the routing problem. It can also be seen that the transmission range had a direct impact on the number of clusters. This is consistent with other studies showing that more clusters are formed when the transmission range is low, since fewer nodes fit in a cluster; if the range is increased, nodes can reach farther nodes as well, resulting in fewer clusters. To further analyze the performance of SOSNET, the algorithm was tested with different node transmission ranges while keeping the grid size and node density fixed at 1 km × 1 km and 200 nodes, respectively. The results, shown in Figure 6, indicate that SOSNET performed best in this situation and provided near-optimal results. Simulations were also carried out with the grid size increased to 1500 m × 1500 m; the results are shown in Figure 7. It can be observed from Figure 6a that with a lower transmission range all the algorithms except SOSNET provided similar results, whereas SOSNET provided the best results, which shows the efficiency of the algorithm. It can also be observed that with an increasing number of nodes the relative performance of SOSNET improved further. For example, with the node density set to 200, SOSNET generated 9 clusters, whereas ACO was second best with 14 clusters, GWO created 17 clusters, and CLPSO generated 21 clusters.
As in the previous two scenarios, SOSNET was again evaluated with the transmission range varied from 50 m to 200 m while keeping the grid size at 1500 m × 1500 m. The results are depicted in Figure 8. It is interesting to note the large performance difference between SOSNET and the other algorithms when the transmission range of the nodes was set to 200 m in this setting; with increasing transmission range, the performance of SOSNET improved further and it created very few clusters. Similar results were observed when the grid size was increased to 2 km × 2 km; the results are shown in Figure 9. In this setting, with the transmission range set to 100 m and the node density set to 160 nodes, both CLPSO and GWO performed poorly, creating 27 and 22 clusters, respectively, whereas ACO created 13 clusters; SOSNET, however, performed excellently, generating only 9 clusters. In the extreme case, with the transmission range at 200 m and the node density at 200 nodes, SOSNET generated 11 clusters, whereas the closest competitor, ACO, generated 16. The same pattern was observed when the various parameters, such as grid size, transmission range, and number of nodes, were changed, indicating that SOSNET provided the near-optimal number of clusters in every setting considered.
The same results were also observed with the transmission range set to 50, 100, 150, and 200 m while keeping the other settings constant at 200 nodes and 2 km × 2 km. It can be clearly observed from Figure 10 that SOSNET's performance was the best among these protocols in every scenario tested. To evaluate SOSNET in a more realistic scenario, a final set of simulations was run on a 3D grid with a transmission range of 50 m to 200 m and a grid size of 500 m × 2000 m, with the node density set to 40 and 80 nodes. The results, depicted in Figure 11, show that even in this scenario SOSNET performed better than the other algorithms.
Load Balance Factor
As mentioned above, it is not possible to have an equal number of nodes in each cluster, so it is important to calculate the load on each CH. The LBF is calculated for each CH by Equation (5):

LBF = n_c / Σ_i (x_i − μ)²,    (5)

where n_c is the number of clusters, x_i is the cardinality of cluster i, and μ is the average number of neighbors of a CH. Figure 12 shows the LBF calculated for a grid size of 1000 m × 1000 m, a transmission range of 25 m, and node densities ranging from 20 to 200 nodes. It can be observed that SOSNET balanced the network load well as the number of neighbors approached the threshold value. From Figure 12, some general observations can be made. When the transmission range was increased, fewer clusters were formed, implying that the number of clusters is inversely proportional to the transmission range. On the other hand, an increase in grid size resulted in a higher number of clusters, showing that these two quantities are directly proportional. A further observation concerns the distance between nodes and the grid size: these too are directly proportional to each other.
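A direct computation of this metric is sketched below. Note that the body of Equation (5) is not reproduced in the source text; the form used here, LBF = n_c / Σ(x_i − μ)², is the definition commonly used in the clustering literature and is assumed to match the paper's intent.

```python
# Hedged sketch of the load balance factor under the assumed form of Eq. (5).
def load_balance_factor(cluster_sizes):
    n_c = len(cluster_sizes)
    mu = sum(cluster_sizes) / n_c                 # average cluster cardinality
    variance_sum = sum((x - mu) ** 2 for x in cluster_sizes)
    return n_c / variance_sum if variance_sum > 0 else float("inf")

print(load_balance_factor([12, 15, 11, 14]))      # higher LBF = better balanced
```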
The above discussion and results show that SOSNET provides near-optimal results in UWSN, making it a promising candidate routing algorithm in this scenario. SOSNET works efficiently not only in the presence of dense traffic, but also in large networks and at any transmission range, which makes it a strong choice for a routing algorithm in UWSNs.
Conclusions
In the current research work, we have proposed an efficient algorithm, SOSNET, for UWSN. SOSNET is a scalable and efficient protocol that uses the MFO technique to search a solution space and select the optimal number of clusters for routing. SOSNET is an evolutionary algorithm that works iteratively in a given search space. As the total number of clusters required decreases, the routing cost of the packets is also reduced: SOSNET provides a near-optimal number of clusters for routing, which ultimately decreases the routing cost and conserves node energy.
To check the effectiveness of the proposed algorithm, different simulations were executed. The algorithm was tested at various node densities and, similarly, with varying sensor node transmission ranges. The results show that SOSNET is a strong solution that can be adopted for routing in such networks. SOSNET was compared with other well-known evolutionary algorithms, i.e., GWO, CLPSO, and ACO, and the results show the superiority of SOSNET. | 7,586.6 | 2019-03-01T00:00:00.000 | [
"Environmental Science",
"Engineering",
"Computer Science"
] |
Searches for physics beyond the Standard Model at KLOE
The KLOE detector, operating at the DAΦNE φ-factory facility of INFN Frascati, has contributed in several ways to the confirmation of the Standard Model, while at the same time searching for direct or indirect hints of new physics signals. Among those, I will briefly discuss the long-standing effort for the precise determination of the hadronic contribution to the muon magnetic anomaly, and the searches for a new, light, neutral vector boson.
Introduction
Built in the late 90's, KLOE is a multi-purpose apparatus designed to optimise the detection efficiency for decays of K⁰_L mesons produced at the DAΦNE φ-factory of INFN Frascati. Between 2000 and 2006, DAΦNE delivered to KLOE 2.5 fb⁻¹ of e⁺e⁻ collision data at the φ(1020) peak, plus an additional 240 pb⁻¹ at 1000 MeV.
Using these data, KLOE has taken part in the recent effort of confirmation and consolidation of the Standard Model, while at the same time searching for direct or indirect hints of its breakdown. Among the main KLOE results one can list:
• The complete set of neutral and charged kaon decay parameters allowing the precision measurement of V_us, setting the best unitarity limit for the CKM matrix.
• A precise determination of the hadronic contribution to the muon magnetic anomaly.
• The most detailed studies on the nature of the scalar mesons.
• The measurement of some of the rarest branching ratios of the K⁰_S and η mesons.
• Several precision tests of the conservation of fundamental discrete symmetries (C, CP, CPT), as well as precision tests of quantum mechanics.
In the following I will briefly discuss a few of the above measurements, relevant for searches of physics beyond the Standard Model.
Indirect searches
The hadronic contribution to the muon magnetic anomaly a_μ cannot be calculated from first principles. It can, however, be evaluated by means of a dispersion integral over the e⁺e⁻ cross section to hadrons, which is dominated by the contribution below ∼1 GeV.
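For reference, the dispersion integral mentioned here has the standard leading-order form (a textbook relation, not a formula reproduced from these proceedings):

a_μ^{had,LO} = (1 / 4π³) ∫_{4m_π²}^{∞} ds K(s) σ⁰_{e⁺e⁻→hadrons}(s),

where K(s) is the QED kernel function, whose 1/s-like enhancement of the low-energy region explains why the cross section below ∼1 GeV dominates.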
At DAΦNE, this cross section can be measured using π⁺π⁻γ final-state events, where the energy of the radiated photon determines the effective q² of the hadronic system. KLOE has contributed to this determination using different data sets and different data selection techniques:
• using 140 pb⁻¹ of data collected in 2001, selecting events with the photon emitted at small polar angles [1];
• using 240 pb⁻¹ of data collected in 2002, with the same "small angle" selection [2];
• using 230 pb⁻¹ of data collected in 2006 at a c.m. energy of 1000 MeV, where the photon is instead selected at large angle [3];
• using 240 pb⁻¹ of data from 2002, where, differently from all of the previous analyses, the sample is normalized to the μ⁺μ⁻γ cross section [4].
The results obtained by these statistically independent analyses are in good agreement with each other and confirm the ∼3.5σ deviation between the calculated value of a_μ and the measured one. This is the largest discrepancy registered so far between a measured quantity and the Standard Model prediction. The nature of this potential new physics signal is so far a mystery.
Direct searches
One possible way to account for the a_μ puzzle mentioned in the previous section is to postulate the existence of a new light neutral boson (known as the U boson, dark photon, A′ or γ′) very weakly coupled to ordinary matter. Interestingly, this hypothesis might also help in interpreting several recent astrophysical observations which cannot be explained in terms of known astrophysical sources (see, for instance, [5-7]).
KLOE has contributed to the search for such new particles using different categories of events:
• the Dalitz decay φ → ηe⁺e⁻;
• continuum events with a radiative photon, in particular μ⁺μ⁻γ final states;
• continuum events with U boson production in association with a new light neutral scalar (a "higgs-like" particle).
The first transition is expected to occur with a rate suppressed by a factor ε² with respect to the standard Dalitz decay process, ε being the effective coupling of the U boson to ordinary matter normalized to the electromagnetic one. However, the electron-positron pair invariant mass must resonate at M_U, peaking over the non-resonant background distribution. The presence of an η meson is tagged by its 3-pion decay, which accounts for more than 50% of its branching ratio. One dangerous instrumental background comes from photon conversions in the detector material, which is removed by purpose-built software algorithms. In the two papers [8,9], KLOE did not find evidence for such peaks; limits could therefore be set, both at 90% CL. An analysis of the μ⁺μ⁻γ final state is also being finalised. The data set is the same as that of the hadronic cross section paper [4], for which a dedicated algorithm for muon identification was developed. It is worth mentioning that in our case π/μ separation is highly nontrivial, since at DAΦNE energies muons are not penetrating particles. Again, the signal should appear as a sharp peak in the dimuon invariant mass distribution, over the continuum QED background. Since no peak is observed, a preliminary limit on U boson production could be set, as shown in Figure 1. Note that the distribution is sharply cut at M_U ∼ 500 MeV because of the photon angular selection.

Figure 1. Exclusion plot for U boson production set by KLOE (blue regions), using φ Dalitz decays (left region) and μ⁺μ⁻γ events (right region). The plot also shows limits set by previous experiments for comparison, as well as the favoured region of the parameter space derived from the observed a_μ value.
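As a toy illustration of a bump hunt of this kind (not KLOE's analysis code), the sketch below scans a smooth dimuon spectrum for a localized excess and, finding none, quotes a crude per-bin counting limit. The binning, the background shape, and the Gaussian 1.64σ limit are all made-up stand-ins for the real procedure with full detector resolution and systematics.

```python
# Hedged toy bump-hunt sketch; every number and shape here is illustrative.
import numpy as np

rng = np.random.default_rng(1)
edges = np.arange(500.0, 1000.0, 5.0)                 # MeV bins above the 500 MeV cut
background = 1000.0 * np.exp(-(edges[:-1] - 500.0) / 300.0)  # smooth QED-like shape
counts = rng.poisson(background)                      # toy data, background-only

# local significance of any excess in each bin against the expectation
signif = (counts - background) / np.sqrt(background)
if np.all(signif < 5.0):
    # crude 90% CL counting limit per bin: N_sig < 1.64 * sqrt(B)
    n_limit = 1.64 * np.sqrt(background)
    print("no peak found; upper limits (first bins):", np.round(n_limit[:5], 1))
```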
As explained before, the U boson is the mediator of a new, hidden symmetry which may be spontaneously broken by some higgs-like mechanism. In this case, as in the Standard Model, a new scalar particle must exist, the h′. As for the SM Higgs, the mass of the h′ is not predictable from first principles. From the phenomenological point of view this has important consequences. In fact, if M_h′ > M_U, the h′ will decay dominantly into a pair of U bosons (either both real, or one real and one virtual), thus giving rise to events of the type e⁺e⁻ → Uh′ → 6l±, where l± stands for all possible combinations of e±, μ±, π± pairs. In the opposite case, it will decay primarily through a loop-induced process, which makes it long-lived, thus producing e⁺e⁻ → Uh′ → l⁺l⁻ + missing energy events.
The first type of event was studied by BaBar [10]. The second is instead being searched for by KLOE, using two different data sets: the first, the most copious, consists of about 1.7 fb⁻¹ acquired at the φ(1020) resonance peak; the second, of about 240 pb⁻¹, was taken at √s = 1000 MeV, where the troublesome background contamination due to K± decays is strongly suppressed. In neither of the two samples was evidence for a signal observed, allowing KLOE to set preliminary limits of α_D ε² < 10⁻⁸-10⁻⁹ for 200 MeV < M_U < 900 MeV and 50 MeV < m_h′ < 450 MeV (α_D is the coupling constant of the new "dark" interaction). A final paper is expected quite soon.
Further analyses are also being developed. In particular, the e⁺e⁻ final state is relevant, since it allows exploring U boson masses down to ∼1 MeV. Also in this case, results are expected quite soon. | 1,806 | 2014-06-01T00:00:00.000 | [
"Physics"
] |
Steady Natural Convection of Non-Newtonian Power-Law Fluid in a Trapezoidal Enclosure
Numerical investigation of free convection heat transfer in a differentially heated trapezoidal cavity filled with a non-Newtonian power-law fluid has been performed in this study. The left inclined surface is uniformly heated, whereas the right inclined surface is uniformly cooled. The top and bottom surfaces are kept adiabatic, with initially quiescent fluid inside the enclosure. The finite-volume-based commercial software FLUENT 14.5 is used to solve the governing equations. The dependency of fluid flow and heat transfer on various flow parameters is analyzed, including the Rayleigh number (Ra) ranging from 10⁵ to 10⁷, the Prandtl number (Pr) from 100 to 10,000, and the power-law index (n) from 0.6 to 1.4. Outcomes are reported in terms of isotherms, streamlines, and local Nusselt number for various Ra, Pr, n, and inclination angles. A grid sensitivity analysis is performed, and the numerically obtained results have been compared with results available in the literature and were in good agreement.
Introduction
Rectangular enclosures with differentially heated vertical sidewalls are of great importance to many fields of study in heat transfer phenomena such as natural convection. This is one of the most widely investigated configurations because of its importance as a benchmark geometry for studying convection effects and comparing numerical techniques. Additionally, the geometry has many applications in industrial techniques and equipment such as solar collectors, food preservation, compact heat exchangers, and electronic cooling systems, among other practical applications. As a consequence of these applications, an extensive literature exists in this field of study, especially in the case of Newtonian fluids [1-4].
Natural convection laminar flow of non-Newtonian power-law fluids plays an important role in various engineering applications related to pseudoplastic fluids. It should be noted that a pseudoplastic fluid is characterized by an apparent viscosity (or consistency) that decreases instantaneously with an increase in shear rate. The study of fluid flow and heat transfer related to power-law non-Newtonian fluids has attracted many researchers over the past half-century. Excellent research on pseudoplastic fluids was conducted by Boger [5], and boundary-layer flows of such non-Newtonian fluids were first investigated by Acrivos [6]. Since then, a large body of literature has been produced, owing to the wide relevance of pseudoplastic fluids, such as chemicals, foods, polymers, molten plastics, and petroleum products, and of various natural phenomena.
It is important to note that most fluids employed in chemical and petrochemical processes and many other industries show non-Newtonian behavior. Natural convection of a non-Newtonian fluid in or around simple geometries, such as a cylindrical enclosure or a heated plate, has received considerable attention [7-14]. Several methodologies, including analytical [7], numerical [8], and experimental [9] approaches, have been employed in most of these studies, and the results indicate that free convection features are considerably affected by the rheological properties of the fluid. However, the crucial issue of the buoyant convective process of a non-Newtonian fluid in various other geometries/enclosures has remained largely unexplored.
Kim et al. [15] studied unsteady buoyant convection of a non-Newtonian power-law fluid within a square enclosure. The authors used the finite volume technique, finding that the rheological properties have a considerable effect on the transient process; additionally, their numerical solutions showed extensive qualitative agreement with the descriptions obtained from scale analysis. Following their study, a steady-state analysis is performed here for a trapezoidal configuration of a non-Newtonian fluid. We have performed parametric studies by varying the angle of the inclined surfaces, the Rayleigh number, the Prandtl number, and the power-law index.
Mathematical Formulation
Consider a two-dimensional trapezoidal enclosure of base length L and height H, filled with an incompressible power-law non-Newtonian fluid. Figure 1 displays the enclosure, with insulated top and bottom walls; the left inclined wall is heated and the right inclined wall is cooled, each at constant temperature. With invocation of the Boussinesq approximation, the governing equations take the form of Equations (1)-(4), where (u, v) are the velocity components in the horizontal and vertical directions; T is the temperature; p is the pressure; g is the gravitational acceleration; and ρ, β, and α are the density, thermal expansion coefficient, and thermal diffusivity of the fluid at the reference temperature T₀. The related boundary conditions follow, and dimensionless forms of Equations (1)-(4) can then be obtained. The crucial part of the formulation is to assign a suitable constitutive equation, which relates the components of the stress tensor to the relevant kinematic variables. For this purpose, a purely viscous power-law non-Newtonian fluid is assumed, which follows the Ostwald-de Waele power law [7-9], τ = K(γ̇)ⁿ. Two material parameters are involved: K, the consistency factor, and n, the power-law index; γ̇ denotes the rate of deformation. Here n = 1 corresponds to fluids of Newtonian behavior with coefficient of viscosity μ, whereas n > 1 indicates dilatant (shear-thickening) behavior and n < 1 pseudoplastic (shear-thinning) behavior of a non-Newtonian fluid. Pseudoplastic fluids generally have a high viscosity, and thermal variation of the viscosity also has a direct effect on the thermal and flow fields. In the present setup, the dependency of K on temperature is not considered, as a small temperature difference, ΔT, is assumed.
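A minimal sketch of the Ostwald-de Waele relation just discussed is given below: the apparent viscosity follows η_a = K |γ̇|^(n−1), so it is constant for n = 1, decreases with shear rate for n < 1 (shear-thinning) and increases for n > 1 (shear-thickening). The consistency factor and shear-rate values are illustrative, not taken from the paper.

```python
# Hedged sketch of the power-law apparent viscosity, eta_a = K * |gamma_dot|^(n-1).
import numpy as np

def apparent_viscosity(K, n, shear_rate):
    return K * np.abs(shear_rate) ** (n - 1.0)

K = 0.5                          # consistency factor (Pa s^n), assumed value
for n in (0.6, 1.0, 1.4):        # shear-thinning, Newtonian, shear-thickening
    eta = apparent_viscosity(K, n, shear_rate=np.array([0.1, 1.0, 10.0]))
    print(f"n = {n}: eta_a = {np.round(eta, 4)} Pa s")
```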
For two-dimensional Cartesian coordinates, this constitutive law simplifies to Equation (8), and from Equations (7) and (8) we obtain Equation (9) for the apparent viscosity [11]. Obviously, for n = 1 the apparent viscosity reduces to the conventional viscosity; for nonunit n, however, non-Newtonian behavior appears, with a complex dependence of the viscosity on the fluid's properties and the velocity gradients. Based on physical reasoning and trial-and-error efforts, a grouping consisting of the consistency coefficient K, the power-law index n, the fluid density ρ₀, and the cavity height H emerges as appropriate [9], giving Equation (10). It is important to note that this effective viscosity, which has the dimension m² s⁻¹, plays a role analogous to that of the kinematic viscosity of Newtonian fluids. Using Equation (10), the Prandtl number and Rayleigh number are defined, respectively, as in Equation (11) [12]. The local Nu of the hot wall is of great interest in many thermal systems; accordingly, the local Nu of the left hot inclined wall is defined by Equation (12), with the derivative taken along the direction normal to the left-side plane.
Numerical Procedure
A finite-volume-based code is used to discretize and solve the coupled set of equations (1)-(4), employing the commercial software Ansys FLUENT 14.5. In this framework, the QUICK scheme was used for the convective terms and the SIMPLE algorithm for pressure-velocity coupling. Convergence criteria were set to 10⁻⁵ for all scaled residuals. A grid of 81 × 81 was found sufficient for obtaining acceptable results, as shown in Table 1; refinement to 101 × 101 changes the maximum stream function (Ψmax) and the average Nusselt number (Nu_avg) by at most 2.04% and 0.35%, respectively, for Pr = 100 in a square enclosure. As an additional accuracy check, the present solution was validated against benchmark solutions for classical Newtonian and non-Newtonian fluids in a square enclosure; Nusselt numbers for selected cases are compared in Table 2.
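The grid-independence test above amounts to a simple relative-difference check between successive grid refinements; the following small Python sketch (ours, with illustrative values) makes the criterion explicit:

```python
def rel_diff_pct(coarse, fine):
    """Percentage change of a monitored quantity between two grids."""
    return 100.0 * abs(fine - coarse) / abs(fine)

# Monitored quantities on the 81x81 vs. 101x101 grids (illustrative values;
# the paper reports differences of 2.04% for Psi_max and 0.35% for Nu_avg).
print(rel_diff_pct(coarse=0.0102, fine=0.0100))  # ~2%   -> grid acceptable
print(rel_diff_pct(coarse=4.98, fine=5.00))      # ~0.4% -> grid acceptable
```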
Results and Discussion
In this section, results are presented on the influence of the important parameters, namely the inclination angle (0 ≤ φ ≤ 60), the power-law index (0.6 ≤ n ≤ 1.4), the Rayleigh number (10⁴ ≤ Ra ≤ 10⁶), and the Prandtl number (100 ≤ Pr ≤ 10,000), on heat transfer and fluid flow, in the form of local Nusselt numbers, isotherms, and stream functions. Figure 2 illustrates isotherms and streamlines at various angles for a trapezoidal enclosure with Ra = 10⁵, Pr = 100, and n = 1. As expected from the presence of hot and cold walls, fluid rises from the bottom horizontal edge adjacent to the hot inclined wall, flows up along it to the top horizontal edge, and then descends beside the oblique cold wall, forming a clockwise roll inside the cavity. As the angle increases, horizontal isotherms occupy more of the enclosure, and the roll is elongated toward the side walls. Figure 3 shows the local Nu of the hot wall for three angles at Pr = 100, Ra = 10⁵, and n = 1. For the square enclosure (φ = 0), the local Nu has a maximum value of nearly 14 at the top end of the hot side wall. Tilting the wall to φ = 30°, the maximum Nu is reduced to almost 8, again near the top end. With a further increase of the angle (φ = 60), Nu is reduced further and many positions share a maximum Nu of nearly 2. It is concluded that the average Nu decreases with increasing trapezoidal angle, which may be attributed to the increase of the mean distance between the two differentially heated inclined side walls. Figure 4 displays isotherms and streamlines for power-law indices n from 0.6 to 1.4 at Pr = 100, Ra = 10⁵, and φ = 30. When shear-thinning behavior gives way to shear-thickening behavior as the power-law index increases, the maximum stream function is reduced from nearly 0.3 kg/s to 9.3 × 10⁻⁶ kg/s. This reveals that for n > 1 the fluid rises only gradually to the top edge, so lower Nu is expected for n > 1 than for n < 1. The isotherms show that the thermal intrusion of fluid at the top and bottom edges thickens as the power-law index increases, indicating that a larger part of the fluid inside the enclosure is affected for higher n. The local Nu is displayed in Figure 5 for different n at Pr = 100, Ra = 10⁶, and φ = 30. The shear-thinning fluid has a larger maximum Nu than the shear-thickening fluid, and this maximum is located near the top edge of the sloped hot wall for all n. Figure 6 presents isotherms and stream functions for different Ra at Pr = 100, n = 1, and φ = 30. The streamlines reveal that for Ra = 10⁴ a single clockwise roll is formed within the enclosure; with increasing Ra this roll is extruded and elongated toward the side walls, generating two small rolls. The maximum stream function also increases, from 0.0022 kg/s for Ra = 10⁴ to 0.01 kg/s for Ra = 10⁶. It is clear that a larger Ra results in higher Nu due to the higher rate of heat transfer from the hot wall to the cold wall. The influence of the Prandtl number is also investigated. Note that the values of Pr are much larger than unity for non-Newtonian fluids, and it has been shown that an increase of this parameter makes the contribution of the convective inertia terms negligible [15]; nevertheless, to gain better insight into the fluid flow, we present numerical results for distinct Pr. Figure 8 displays isotherms and streamlines for different Pr at Ra = 10⁵, n = 1, and φ = 30.
The differences in the isotherms and stream functions are negligible. The maximum stream function is reduced from 0.0052 kg/s at Pr = 100 to 5.2 × 10⁻⁵ kg/s at Pr = 10,000. Note that Nu does not change with Pr when the power-law index is unity [15].
Conclusions
A numerical study has been performed on the steady natural convection of non-Newtonian power-law fluids within a trapezoidal cavity with differentially heated walls. The main objective of the present work was to assess the influence of the governing parameters, namely the power-law index, the trapezoidal angle, Pr, and Ra, in terms of isotherms, streamlines, and local Nusselt numbers. The main outcomes of the study are as follows.
(i) With increasing trapezoidal angle, the roll formed within the enclosure is elongated and extruded toward the side walls. Additionally, the maximum Nu on the left hot wall is reduced as the trapezoidal angle increases.
(ii) A shear-thinning working fluid yields a higher Nu than a shear-thickening one. This may be attributed to the lower maximum stream function at higher power-law index.
(iii) Increasing Ra enhances the local Nu and extrudes the roll generated at the core of the enclosure toward the side walls.
(iv) Pr variation has no significant effect on Nu, as most non-Newtonian fluids have high values of Pr. The maximum stream function is, however, reduced as Pr increases.
"Engineering",
"Physics"
] |
Bioinformatic Tools Identify Chromosome-Specific DNA Probes and Facilitate Risk Assessment by Detecting Aneusomies in Extra-embryonic Tissues
Despite their non-diseased nature, healthy human tissues may show a surprisingly large fraction of aneusomic or aneuploid cells. We have shown previously that hybridization of three to six non-isotopically labeled, chromosome-specific DNA probes reveals different proportions of aneuploid cells in individual compartments of the human placenta and the uterine wall. Using fluorescence in situ hybridization, we found that human invasive cytotrophoblasts isolated from anchoring villi or the uterine wall had gained individual chromosomes. Chromosome losses in placental or uterine tissues, on the other hand, were detected infrequently. A more thorough numerical analysis of all possible aneusomies occurring in these tissues and the investigation of their spatial as well as temporal distribution would further our understanding of the underlying biology, but it is hampered by the high cost of and limited access to DNA probes. Furthermore, multiplexing assays are difficult to set up with commercially available probes due to limited choices of probe labels. Many laboratories therefore attempt to develop their own DNA probe sets, often duplicating cloning and screening efforts underway elsewhere. In this review, we discuss the conventional approaches to the preparation of chromosome-specific DNA probes followed by a description of our approach using state-of-the-art bioinformatics and molecular biology tools for probe identification and manufacture. Novel probes that target gonosomes as well as two autosomes are presented as examples of rapid and inexpensive preparation of highly specific DNA probes for applications in placenta research and perinatal diagnostics.
INTRODUCTION
The human placenta is a vital organ anchoring the fetus to the mother via the uterus and providing an interface for the transport of nutrients, gases and waste. The overwhelming majority of chromosomal studies of the placenta have been performed on cells biopsied from floating villi, which were cultured for several days to obtain metaphase spreads for conventional chromosome banding analysis. We decided to perform investigations on uncultured interphase cells using fluorescence in situ hybridization (FISH), since cell viability and proliferation are minor concerns when using FISH [1-5]. Probes for our initial studies of aneuploidy in extraembryonic tissues were obtained from a commercial source (Abbott, Des Plaines, IL) [6,7]. Probe sets comprised three to four chromosome enumerator probes (CEPs) targeting chromosomes X, Y, 16 or 18, or locus-specific probes (LSPs) for chromosome 13 or 21 [7]. Analyzing cells from anchoring villi and the uterine wall, also referred to as the 'basal plate', we found that the karyotypes of these extraembryonic cells were mostly unrelated to the karyotype of the fetus [5,7,8]. The most common abnormality we observed was a gestational age-related gain of chromosomes affecting invading cytotrophoblasts (iCTBs) [7]. For a more comprehensive analysis, and to increase the number of chromosome types that can be scored simultaneously in a single FISH experiment, we had to develop our own custom sets of chromosome-specific DNA probes.
While the DNA probe development efforts described in the present communication were prompted by the need for a novel probe set for more comprehensive cytogenetic analyses of normal placental tissue compartments from uncomplicated pregnancies [6], DNA probes selected in a similar fashion are likely to find widespread application in investigations of unusual conditions such as spontaneous abortions [9,10] or confined placental mosaicism (CPM) [11-14], the cytogenetic analysis of human preimplantation embryos [15-21], perinatal analysis [22-24], tumor research and diagnosis [1-5, 25-27], as well as radiobiological or environmental studies [28-40]. Thus, the description of our probe selection approach, which combines bioinformatics tools for mining genomic databases with deeply redundant recombinant DNA clone libraries and follows a brief review of the more conventional techniques for DNA probe selection, may provide useful information for a diverse group of researchers in the life sciences and enable the average research lab to prepare chromosome-specific custom DNA probes at a very affordable cost.
Selection of DNA Target Sequences and Preparation of Non-Isotopically Labeled DNA Probes for FISH
Briefly, successful cytogenetic analysis by FISH is based on the formation of stable hybrids between DNA targets inside cell nuclei or on metaphase chromosomes and the labeled DNA probe molecules provided by the investigator [41]. The DNA probes can be marked either by a fluorochrome, which can then be detected by eye or by a camera attached to a fluorescence microscope, or by a non-fluorescent, non-isotopic hapten, most often biotin, digoxigenin or dinitrophenol, which is detected via a fluorescent moiety such as fluorochrome-labeled avidin or an antibody. Different probe types are available to suit particular applications: whole chromosome painting probes allow the delineation of inter-chromosomal translocations in metaphase spreads [37,42,43], while intra-chromosomal rearrangements are detected in metaphase or interphase cells with chromosome band-specific probes [44-47]. In addition, there are DNA probes that target somewhat smaller, gene- or locus-specific regions [34,48-52].
While FISH technology has found widespread application in research laboratories around the world, its acceptance in clinical settings is still hampered by the limited selection of commercially available, U.S. Food and Drug Administration (FDA)-approved tests and the typically labor-intensive, costly nature of producing DNA probes that perform well in multiplexed assays [53]. While FDA approval may be required for all diagnostic probes shipped across state borders in the U.S., in-house preparation of DNA probes can lead to significant cost savings in research laboratories. Our laboratories have a long-standing track record in the production of novel DNA probes and innovative cytogenetic assays, many of which have found their way into contemporary cancer research or preimplantation genetic diagnosis (PGD) [16,43,45,47,48,50,54-60]. To facilitate the distribution of molecular cytogenetic assays and to make DNA probes as well as multiplex FISH tests available to less experienced laboratories, we have undertaken probe production pilot studies that take advantage of the vast resources generated in the course of the Human Genome Project, such as physical maps and recombinant DNA libraries.
Our initial studies focused on the preparation of novel DNA probes for chromosome scoring or 'enumeration' in interphase cell nuclei and metaphase spreads, since these remain the most common applications in research and clinical settings [53,61]. The vast majority of these CEPs target highly reiterated, tandemly repeated DNA sequences in order to bind many copies of a rather small probe sequence to a tightly localized area or volume. Different ways of isolating and purifying such DNA probes exist [25,54,59,60,62-66].
Briefly, up until the 1980s, satellite DNA sequences were enriched, isolated and characterized by a cumbersome, labor-intensive workflow involving either density gradient centrifugation or timed reassociation of single-stranded, thermally denatured DNA followed by enzymatic digestion of the remaining single-stranded DNA with exonucleases. This was followed by molecular cloning, library screening, clone characterization and DNA sequencing, which made this a rather costly enterprise [67-69]. The use of endonucleases to break up large tandemly repeated DNA clusters facilitated the hunt for chromosome-specific heterochromatic satellite DNA, expedited the cloning and characterization steps, and led to major progress in the identification of chromosome-specific higher-order tandem repeats [62,70-74].
The breakthrough in the isolation of chromosome-specific DNA polynucleotides and the preparation of DNA probes for FISH came with the application of DNA amplification using the polymerase chain reaction (PCR) in the late 1980s: chromosome-specific sequences could be extracted on-line from larger, higher-order tandem repeats of satellite DNA to define PCR primer sequences and amplify a specific fragment from genomic DNA [54] (Fig. 1A).
In a variation of this scheme, chromosome-specific sequences could be amplified with consensus PCR primers from template DNA which provided limited sequence variety, such as flow-sorted human or mouse chromosomes [25,75] (Fig. 1B). In general, DNA probes generated this way still represented a pool of diverse sequences and molecular cloning was required to isolate the highly specific, informative probes [25].
It was not until the completion of the first draft of the human genome sequence that new sets of genomic tools became available, revolutionizing the way individual investigators analyze the human genome, often using no more than a personal computer and an online connection to publicly available databases. Large-insert recombinant DNA libraries such as YAC [76,77], P1 [78,79] or BAC [66,80,81] libraries had been constructed and characterized, and clones had been end-sequenced and placed on larger physical maps by basic sequence alignment procedures [82].
The work of Baumgartner et al. (2006) [65] showed that database searches (to identify BAC clones rich in satellite content) combined with in vitro DNA amplification can expedite the preparation of chromosome-specific DNA probes. However, this approach still requires some a priori knowledge of the target sequence to specify the PCR primers [65].
We recently demonstrated that publicly available online databases can be analyzed with a suite of simple bioinformatics tools to identify chromosome-specific BAC clones [60]. Specifically, we used our proprietary information on a Y chromosome-specific sequence [83-85] and a DNA sequence alignment program (BLAST) [82] to identify BAC clone RP11-243E13 as a potential DNA probe. Using the Genome Browser program at the UC Santa Cruz (UCSC) Genome Center web site (genome.ucsc.edu), we then identified a BAC clone mapped to the satellite-containing centromeric heterochromatin of the human X chromosome (BAC RP11-294C12) [60]. Probes prepared from these two BAC clones showed an impressive, better-than-expected performance in FISH experiments, displaying strong, highly specific FISH signals localized exclusively to the target chromosomes (Fig. 2).
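As an illustration of this database-mining step, the sketch below uses Biopython's web-BLAST wrapper to align a candidate repeat sequence against GenBank and flag hits whose definition lines mention RP11 BAC clones. This is our reading of the workflow rather than the authors' original scripts; the query sequence is a placeholder, and a real run requires network access to NCBI:

```python
from Bio.Blast import NCBIWWW, NCBIXML

# Placeholder query; substitute the chromosome-specific repeat of interest.
query_seq = "GGAATGGAAT" * 30  # hypothetical satellite-like test sequence

# Submit a nucleotide BLAST search against the 'nt' database (network call).
handle = NCBIWWW.qblast("blastn", "nt", query_seq)
record = NCBIXML.read(handle)

# Report alignments that look like RP11 BAC clone entries.
for aln in record.alignments:
    if "RP11" in aln.title:
        best = aln.hsps[0]
        print(f"{aln.title[:70]}  E={best.expect:.2g}  len={best.align_length}")
```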
Probe Preparation and Fluorescence in Situ Hybridization (FISH) of BAC-derived DNA Probes
The procedures used for hybridization of BAC-derived DNA probes largely follow the published procedures for oligonucleotide, plasmid or P1-derived DNA probes [50,86,87]. In typical experiments, the BAC DNAs are extracted from overnight cultures following an alkaline lysis protocol [88] or using a BAC DNA miniprep kit (Zymo Research; Irvine, CA). The DNAs are checked on a 1% agarose gel and quantitated spectrophotometrically. Probe DNAs are labeled with biotin-14-dCTP or digoxigenin-11-dUTP (Roche; Indianapolis, IN) by random priming using a commercial kit (BioPrime Kit, Invitrogen; Carlsbad, CA). Slides of metaphase spreads are made from short-term cultures of peripheral blood lymphocytes from a karyotypically normal male following published procedures [35].
The slides (metaphase cells, interphase cell nuclei or slides carrying deparaffinized tissue sections) are denatured in 70% formamide at 70 °C, dehydrated and overlaid with a hybridization cocktail containing 20-50 ng of denatured probe DNA in buffer containing 10% dextran sulfate and 50-55% formamide. Following overnight incubation at 37 °C (48 or more hours for deparaffinized tissue sections), the slides are washed to remove excess probe and incubated with fluorochrome-conjugated avidin or the corresponding antibodies as required [59,66,89]. Finally, the slides are mounted with 4′,6-diamidino-2-phenylindole (DAPI) (0.1 µg/ml) in antifade solution, coverslipped, and imaged on a fluorescence microscope.
BAC-Derived DNA Repeat Probes for Autosomal Targets
We were also interested in whether this concept of knowledge-based probe selection could be extended to probes for human autosomes. In our 2006 paper [65], we had proposed a satellite-rich BAC clone, RP11-469P16, as a template for a PCR-based probe generation scheme. The UCSC Human Genome Browser at genome.ucsc.edu, however, indicates the presence of a long interspersed repeated DNA sequence (LINE) in the BAC insert, which may lead to undesirable cross-hybridization, since LINEs are not chromosome-specific but exist in thousands of copies across the human genome.
According to information provided on the UCSC Genome Browser web site, a BAC insert typically consists of 25-350 kb of DNA. During the early phase of a sequencing project, it is common to sequence a single read (approximately 500 bases) at each end of each BAC from a large library. Later in the project, these BAC end reads are mapped in silico onto the draft genome sequence. Tracks in the genome browser, as shown in Fig. 3, display these mappings in cases where both paired ends could be placed within the expected distance of each other. A valid pair of BAC end sequences must be at least 25 kb but no more than 350 kb apart, with the first BAC end sequence in the '+' orientation and the second in the '-' orientation. BAC end sequences are placed on the assembled sequence using Jim Kent's blat program [90]. These tracks can be used to determine which BAC contains a given gene or, with the 'RepeatMasker' program (www.repeatmasker.org), which BAC contains particular DNA repeat clusters; a small validity check in this spirit is sketched below. Please note that for the heterochromatic regions there has been almost no clone validation to ensure that the predicted size or location of a BAC probe is correct.
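The pairing rules quoted above translate directly into a small validity check; a hypothetical sketch (the function and field names are ours):

```python
def is_valid_bac_end_pair(pos1, strand1, pos2, strand2,
                          min_sep=25_000, max_sep=350_000):
    """Apply the BAC end-pair rules described in the text: the two end reads
    must map 25-350 kb apart, the first on '+' and the second on '-'."""
    return (strand1 == "+" and strand2 == "-"
            and min_sep <= pos2 - pos1 <= max_sep)

print(is_valid_bac_end_pair(1_000_000, "+", 1_160_000, "-"))  # True: ~160 kb insert
print(is_valid_bac_end_pair(1_000_000, "+", 1_010_000, "-"))  # False: too close
```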
When a DNA probe prepared from BAC RP11-469P16 was used, FISH results showed cross-hybridization to multiple chromosomes other than chromosome 2 (Fig. 4A, B). However, a DNA probe prepared from BAC clone RP11-100H17 (Fig. 3, arrow), which is expected to bind ~20 kb proximal of RP11-469P16 on the short arm of human chromosome 2, gave strong, highly specific FISH signals on interphase and metaphase cells (Fig. 4C). This can be attributed to the lack of interspersed, non-chromosome-specific DNA repeats in the insert of BAC RP11-100H17, as well as its composition of tandem repeat units of entirely chromosome 2-specific satellite DNA.
Since inserts of BAC clones that contain satellite DNA but no short or long interspersed repeated DNA sequences (SINEs, LINEs) appear to render a high signal-to-noise ratio and strong chromosome-specific signals that can easily be scored by eye under a microscope, we prepared a SINE-/LINE-free DNA probe for the short arm of chromosome 4, band p11. The BAC RP11-360M1 carries an insert of an estimated 59,846 bp, which is rich in tandemly repeated satellite DNA but free of interspersed repeat DNA (Fig. 5).
In situ hybridization of the chromosome 4-specific DNA probe prepared from BAC RP11-360M1, in combination with a differently labeled probe (BAC RP11-294C12) for the centromeric region of the X chromosome, to deparaffinized human placental tissue sections showed excellent probe performance, i.e., strong and highly specific DNA signals that were easily scored (Fig. 6).
CONCLUDING REMARKS
Molecular cytogenetic analyses using FISH have made major contributions to our understanding of disease processes, including tumorigenesis, cancer progression and metastasis, but also to demonstrating the existence of aneuploid cell populations or cohorts in seemingly normal tissues [5, 61, 91-96].
For example, with an incidence of one in every 5-6 clinically recognized pregnancies, spontaneous abortions (SABs) during the first trimester are the most frequent pregnancy complication in women [9]. Causes of SABs include chromosomal abnormalities, uterine defects, immunological problems, hormonal imbalance and infections [2-6]. While more than half of all first trimester SABs are associated with chromosomal abnormalities, nearly 40% remain unexplained [6]. With no apparent association between placental villous morphology and fetal chromosomal abnormalities, SABs with either euploid or aneuploid conceptuses demonstrated incomplete cytotrophoblast (CTB) differentiation and compromised invasion [7-9]. These observations prompted our studies of the chromosomal make-up of extra-embryonic cells at the materno-embryonic and fetal-maternal interfaces, i.e., the human placenta and the uterine wall. However, as mentioned in the introduction, the application of the DNA probes described in this review is not limited to investigations of fetal or extra-embryonic tissues.
The novel database mining approach to DNA probe selection described here is a fast and inexpensive solution to the problem of 'probe bottlenecks' in clinical research. Mapping information for BAC clones is publicly available from UCSC or the National Center for Biotechnology Information (NCBI) at the National Institutes of Health, USA, and the clones themselves can be purchased from libraries outside the US, such as the Wellcome Trust Sanger Institute, Hinxton, UK, or the Resources for Molecular Cytogenetics, Dipartimento di Genetica e Microbiologia, Universita' di Bari, Bari, Italy, as well as from several commercial sources. The BAC-derived satellite DNA probes also seem to outperform most of the chromosome enumerator probes presently in use in research and clinical laboratories. In summary, the procedures described in the present communication allow a laboratory with typical, non-specialist equipment to prepare chromosome-specific DNA probes in just a few days, and thus represent an efficient, rapid and cost-conscious approach to the generation of chromosome-specific DNA probes for cytogenetic studies.
DISCLAIMER
This document was prepared as an account of work sponsored by the United States Government. While this document is believed to contain correct information, neither the United States Government nor any agency thereof, nor The Regents of the University of California, nor any of their employees, makes any warranty, express or implied, or assumes any legal responsibility for the accuracy, completeness, or usefulness of any information, apparatus, product, or process disclosed, or represents that its use would not infringe privately owned rights. Reference herein to any specific commercial product, process, or service by its trade name, trademark, manufacturer, or otherwise, does not necessarily constitute or imply its endorsement, recommendation, or favoring by the United States Government or any agency thereof, or The Regents of the University of California. The views and opinions of authors expressed herein do not necessarily state or reflect those of the United States Government or any agency thereof, or The Regents of the University of California.
CONFLICT OF INTEREST
The authors declare that they have no competing interests.
ACKNOWLEDGEMENTS
The skillful assistance of guests and staff of the Weier laboratory, LBNL, is gratefully acknowledged. This work was supported in part by NIH grants CA123370, HD45736, CA132815 and CA136685 (to HUW) and carried out at the Ernest Orlando Lawrence Berkeley National Laboratory under contract DE-AC02-05CH11231. We acknowledge the support of researchers and staff at the University of California, San Francisco, in providing metaphase spreads and placental tissues.
"Biology"
] |
Mutations in NA That Induced Low pH-Stability and Enhanced the Replication of Pandemic (H1N1) 2009 Influenza A Virus at an Early Stage of the Pandemic
An influenza A virus that originated in pigs caused a pandemic in 2009. The sialidase activity of the neuraminidase (NA) of previous pandemic influenza A viruses is stable at low pH (≤5). Here, we identified the amino acids responsible for this property. We found differences among pandemic (H1N1) 2009 viruses in the stability of their NAs at pH 5.0, and showed that a low-pH-stable NA enhanced the replication of these viruses. The enhancement of virus replication by low-pH-stable NA may have contributed to the rapid worldwide spread of pandemic (H1N1) 2009 viruses and their adaptation to humans during the early stages of the 2009 pandemic.
Introduction
In the spring of 2009, the pandemic (H1N1) 2009 virus emerged in Mexico and spread rapidly among humans worldwide [1]. Subtypes of influenza A virus are determined by the antigenicities of the two envelope glycoproteins, hemagglutinin (HA) and neuraminidase (NA). HA binds to the terminal sialic acid of glycoconjugates on the host cell surface, which serves as the viral receptor. NA facilitates progeny virus release from the host cell surface through its sialidase activity, which cleaves sialic acid from glycoconjugates. The worldwide spread of a new subtype of influenza A virus in humans is called a 'pandemic'. There were three pandemics in the 20th century: the H1N1 Spanish flu in 1918, the H2N2 Asian flu in 1957, and the H3N2 Hong Kong flu in 1968. Influenza A virus has an eight-segmented RNA genome comprising PB2, PB1, PA, HA, nucleoprotein (NP), NA, M, and NS. New subtype viruses, which are pandemic candidates, are thought to arise by reassortment of genome segments between a human virus and a virus of another host in an intermediate host such as the pig. Multiple factors are associated with the emergence of pandemic influenza viruses, including their replicative ability in humans and their antigenicity. For the pandemic (H1N1) 2009 virus, mutations in PB2, PB1-F2 (a frame-shift product of the PB1 gene), PA, HA, NP, and NS1 have been shown to play a role in virus replicability and pathogenicity in cell culture and animals [2,3]; however, the properties of the NA of pandemic (H1N1) 2009 virus are largely unknown, with the exception of its resistance to the sialidase inhibitors zanamivir and oseltamivir, which inhibit progeny virus release from the host cell surface.
We previously showed that influenza virus NAs differ in their stability at low pH (≤5). All avian virus NAs tested to date are highly stable at low pH; their sialidase activities are retained even after pre-incubation for 10 min at pH 5.0 or lower [4]. The NAs of pandemic human viruses, such as the 1918 H1N1, 1957 H2N2, and 1968 H3N2 viruses, are also low-pH-stable. In contrast, the NAs of most seasonal human influenza A viruses (IAVs) are unstable at low pH [4-7]. Viruses possessing a low-pH-stable NA from a pandemic IAV in the background of A/WSN/1933 (WSN; H1N1) replicated more efficiently in cell culture and in mouse lungs than a WSN virus possessing a low-pH-unstable NA [8]. Furthermore, we found that the NA of the 1968 pandemic H3N2 virus was low-pH-stable and that this property disappeared from human H3N2 viruses after 1971 [6]. These findings suggested that a low-pH-stable NA might contribute to a pandemic and play an important role in the adaptation of human viruses.
Here, we examined the low-pH stability of the sialidase activity of the pandemic (H1N1) 2009 viruses. We found differences in the pH stability among their NAs. We also identified the amino acid determinants that confer low-pH stability to pandemic (H1N1) 2009 viruses and used a reverse genetics approach to show that low-pH-stable NA enhances virus replication.
Cells
Human embryonic kidney 293T cells were maintained in high-glucose Dulbecco's modified Eagle's medium supplemented with 10% fetal bovine serum (FBS). Madin-Darby canine kidney (MDCK) cells were maintained in Eagle's minimum essential medium supplemented with 5% FBS. Human lung adenocarcinoma Calu-3 cells (kindly provided by Raymond Pickles, University of North Carolina) were maintained in a 1:1 mixture of Dulbecco's modified Eagle's medium and Ham's F12 nutrient medium (DF12; Invitrogen, Carlsbad, CA) supplemented with 10% FBS.
NA genes and plasmids
The NA genes were inserted into the multicloning region between the EcoR I site and the Xho I site of the expression plasmid pCAGGS/MCS [9], between the two BsmB I sites of the expression plasmid pCAGGS/BsmBI [10], or between the two BsmB I sites of the plasmid pHH21 [9]. The V106I and N248D mutations of Cal04 NA were introduced by means of PCR. All NA genes were sequenced using specific primers.
Sialidase activity of cell-expressed NA
293T cells (1.5 × 10⁵ cells/well) in a 24-well tissue culture plate were cultured overnight. The following day, the approximately 70% confluent cells were transfected with a plasmid (1 µg/well) for NA expression using TransIT-293 (Mirus, Madison, WI). After a 24-h incubation at 37 °C, the transfected cells were suspended in phosphate-buffered saline (PBS; 1.2 ml/well), and 50 µl of each cell suspension was transferred into microtubes and centrifuged at 100 × g for 10 min. The cell pellets were incubated with 57 µl of 10 mM acetate buffer (pH 4.0, 5.0, or 6.0) at 37 °C for 10 min. When we measured the pH profiles of the sialidase activities, 15 µl of each cell suspension was transferred and centrifuged as above, and the cell pellets were instead incubated with 57 µl of 10 mM acetate buffer (pH 4.0, 4.5, 5.0, 5.5, or 6.0) or 10 mM phosphate buffer (pH 6.0, 6.5, 7.0, 7.5, or 8.0). Fifty microliters of each suspension was then transferred to a 96-well black plate on ice and reacted with 2.5 µl of 2 mM 2′-(4-methylumbelliferyl)-N-acetylneuraminic acid (Sigma-Aldrich Corp., St. Louis, MO) at 37 °C for 30 min. The reaction was stopped by the addition of 200 µl of 100 mM sodium carbonate buffer (pH 10.7). The fluorescence intensity (Ex, 355 nm; Em, 460 nm) was measured with an Infinite M1000 microplate reader (Tecan Group Ltd., Männedorf, Switzerland). The sialidase activity of the cell-expressed NA was expressed as a percentage of the activity at pH 6.0.
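The normalization used to report these measurements is straightforward; the sketch below (ours; the readings and blank value are invented for illustration) expresses background-corrected fluorescence as a percentage of the pH 6.0 reference:

```python
def relative_activity(fluorescence, blank, ref="pH 6.0"):
    """Express background-corrected fluorescence readings as a percentage
    of the activity measured after pre-incubation at pH 6.0."""
    corrected = {k: v - blank for k, v in fluorescence.items()}
    return {k: 100.0 * v / corrected[ref] for k, v in corrected.items()}

# Hypothetical readings in arbitrary fluorescence units.
readings = {"pH 4.0": 900.0, "pH 5.0": 9500.0, "pH 6.0": 15000.0}
print(relative_activity(readings, blank=250.0))
# A low-pH-stable NA would retain a high percentage at pH 5.0.
```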
Generation of reverse genetics viruses
Reverse genetics was performed in the backbone of Cal04 by using the pHH21 vector containing the wild-type NA gene or a mutated Cal04 NA gene (two mutations: Val to Ile at position 106 and Asn to Asp at position 248, based on Cal04 NA numbering), together with seven plasmids (pHH21-PB2, -PB1, -PA, -HA, -NP, -M, and -NS) from Cal04. The viruses were propagated in MDCK cells in serum-free medium (SFM), Hybridoma-SFM (Invitrogen Corp., Carlsbad, CA), containing TPCK-trypsin (1 µg/ml). The NA genes of the resulting virus stocks were sequenced using specific primers.
For growth curves, MDCK cells (1 × 10⁵ cells/well) or Calu-3 cells (1 × 10⁵ cells/well) in a 24-well plate were cultured overnight. The cells were then infected with viruses at a multiplicity of infection of 0.005 (plaque-forming units/cell) for 30 min at 37 °C. After being washed with PBS, the MDCK cells were cultured in SFM containing TPCK-trypsin (1 µg/ml), and the Calu-3 cells in a 1:1 mixture of Dulbecco's modified Eagle's medium and Ham's F12 nutrient medium containing TPCK-trypsin (0.5 µg/ml) and 0.3% bovine serum albumin. The virus titers in the supernatant at 17, 26, 43, and 52 h post-infection were measured by means of plaque assays.
Plaque assay
MDCK cells (2.0 × 10⁵ cells/well) in a 6-well plate were cultured overnight. The confluent cell monolayers were then washed and incubated for 30 min at 37 °C with log dilutions of virus in SFM. The infected monolayers were then overlaid with SFM containing TPCK-trypsin (1 µg/ml) and 0.5% agarose. The monolayers were incubated at 37 °C for 2 days and then fixed with 2 ml/well of 10% formalin solution at room temperature overnight. To visualize viral plaques, the fixed cells were incubated with 1% Crystal Violet solution in 20% methanol and then washed with water. To visualize viral foci, the viral antigens in the infected cells were reacted with a rabbit anti-A/WSN/1933 (H1N1) polyclonal antibody (R309) for 30 min at room temperature and then with horseradish peroxidase-conjugated goat anti-rabbit IgG (Invitrogen Corp., Carlsbad, CA) for 30 min at room temperature. The infected cells were stained using 3,3′-diaminobenzidine tetrahydrochloride. The size of each plaque was measured from scanned images using ImageJ release 1.40g (National Institutes of Health, USA, http://rsb.info.nih.gov/ij/).
Phylogenetic tree
The phylogenetic tree was generated from the NA open reading frame nucleotide sequences of 65 pandemic (H1N1) 2009 viruses and 15 human H1N1 viruses isolated in the 2010/2011 season, together with four swine influenza A viruses used as an out-group to differentiate between the early and late stages in the evolution of pandemic (H1N1) 2009 NA (Table S1), using DNASTAR Lasergene software (DNASTAR, Inc., Madison, WI).
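The tree itself was built with DNASTAR Lasergene; for readers without access to that software, an equivalent distance-based tree can be sketched with Biopython (the input file name and alignment format below are assumptions, and the method is a generic neighbor-joining substitute, not the authors' exact pipeline):

```python
from Bio import AlignIO, Phylo
from Bio.Phylo.TreeConstruction import DistanceCalculator, DistanceTreeConstructor

# Hypothetical multiple alignment of the NA open reading frames.
alignment = AlignIO.read("na_orfs.aln", "clustal")

calculator = DistanceCalculator("identity")              # pairwise identity distances
constructor = DistanceTreeConstructor(calculator, "nj")  # neighbor-joining
tree = constructor.build_tree(alignment)

Phylo.draw_ascii(tree)  # quick text rendering of the resulting tree
```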
Low-pH stabilities of the sialidase activities of pandemic (H1N1) 2009 virus NAs
All of the NAs of the avian viruses tested and of the pandemic viruses of 1918, 1957, and 1968 were low-pH-stable [4-8]. Since the pandemic (H1N1) 2009 virus NA originated from a Eurasian avian-like swine virus, which was introduced into European pigs from a bird in 1979 [1], the pandemic (H1N1) 2009 virus NA may have retained its low-pH stability. To test this possibility, we examined the low-pH stabilities of the sialidase activities of pandemic (H1N1) 2009 virus NAs by using cell-expressed NA after pre-incubation at pH 4.0, 5.0, or 6.0. None of the NAs of the four pandemic (H1N1) 2009 viruses we tested showed the same low-pH stability as the previous pandemic virus A/Brevig Mission/1/1918 (H1N1) (BM1918 H1N1) [7]. However, the low-pH stabilities of these pandemic (H1N1) 2009 virus NAs appeared to be somewhat higher than those of seasonal H1N1 NAs [7,11]: the NAs of Nor3568 and Nor3858 retained high sialidase activity at pH 5.0 and even showed activity at pH 4.0, whereas the NAs of Cal04 and WisWSLH had reduced activity at pH 5.0 and lost activity at pH 4.0 (Figure 1A). In principle, the decrease in sialidase activity at pH 4.0 and 5.0 could be due to a shift of the optimum pH toward higher pH. We therefore checked the pH profiles of the sialidase activities of Cal04 NA and Nor3858 NA. Both NAs showed an optimum pH within the range 6.0-6.5 (Figure 1D), so the difference in low-pH stability did not result from an optimum pH shift. This coincides with the results of our previous study showing that the low-pH stability of N2 NA does not result from an optimum pH shift [4].
Identification of the amino acid residues responsible for the low-pH stability of pandemic (H1N1) 2009 virus NA
A comparison of the NA amino acid sequences revealed that the NAs of the pandemic (H1N1) 2009 viruses had different residues at positions 80, 106, 248, and 257 (Figure 1B). Since there were two amino acid differences between the NA of Cal04, which was unstable at pH 5.0, and the NA of Nor3858, which was stable, the Ile at position 106 and the Asp at position 248 of Nor3858 NA were candidate determinants of low-pH stability. To identify the responsible residues, we generated three mutants of Cal04 NA: one with a single mutation at position 106 from Val to Ile (V106I), one with a single mutation at position 248 from Asn to Asp (N248D), and one with both mutations. We then measured the low-pH stabilities of the mutant NAs (Figure 1C). The low-pH stability of the Cal04 NA mutant carrying both V106I and N248D was similar to that of Nor3568 NA and Nor3858 NA, indicating that the V106I and N248D mutations together were responsible for the low-pH stability of pandemic (H1N1) 2009 virus NA.
Comparison of the replicability of reverse genetics-generated pandemic (H1N1) 2009 viruses possessing low-pH-stable NA or low-pH-unstable NA
We previously demonstrated that low-pH-stable NAs of human influenza A viruses of subtypes N1 and N2 can enhance virus replication [5,7]. To test whether the low-pH stability of pandemic (H1N1) 2009 virus NA also enhances virus replication, we produced two pandemic (H1N1) 2009 viruses possessing either wild-type Cal04 NA or mutated Cal04 NA with the V106I and N248D mutations, using reverse genetics in the backbone of Cal04. The wild-type Cal04 formed very small plaques, whereas the Cal04 NA mutant with V106I and N248D formed clear, large plaques (Figure 2A). Viral antigens in these plaques were confirmed by immunostaining (data not shown). There was a statistically significant difference in plaque size between the two viruses (Figure 2B). The Cal04 NA mutant showed approximately 10 times higher virus titers than the wild-type at each time point of the growth curve, not only in MDCK cells (Figure 2C) but also in Calu-3 cells (Figure 2D), which have been reported to support high replicability of pandemic (H1N1) 2009 virus [12]. These results indicate that the low-pH stability of pandemic (H1N1) 2009 virus NA enhances virus replication, and that pandemic (H1N1) 2009 viruses circulating in 2009 possessed NAs of different low-pH stabilities that are directly linked to virus replicability. We also examined the locations of the two residues on the three-dimensional structure of N1 NA [13-15]. Position 248, a surface residue, is located near both the active site and the secondary calcium ion-binding site (Figure 3A), which is thought to be involved in maintaining the conformation of the active site [16,17]. Position 106, an inner residue, is located near both the primary calcium ion-binding site and the subunit interfaces (Figure 3B). We previously identified the residues at positions 430, 435, and 454 (BM1918 H1N1 N1 numbering) as important for the low-pH stability of N1 NAs [7], and the residues at positions 344 and 466 (1968 H3N2 N2 numbering) as important for the low-pH stability of N2 NAs [5]; these residues are likewise located near the enzymatic active site, the calcium ion-binding sites, and the subunit interfaces. These findings suggest that subtype-independent mechanisms exist by which influenza virus NAs become stable at low pH, although the specific amino acid residues necessary for this conversion may differ among the subtypes.
Discussion
To trace when the low-pH-stable NA emerged, we surveyed NA sequences of pandemic (H1N1) 2009 viruses deposited in public databases (Table S1). This survey suggests that pandemic (H1N1) 2009 viruses with both the V106I and N248D mutations in NA, many of which were isolated from late April 2009 onward in the United States (Table S1), rapidly became predominant (Table S2). This finding suggests that acquisition of both the V106I and N248D mutations allowed efficient adaptation of the pandemic (H1N1) 2009 virus to humans. However, the low-pH stability of pandemic (H1N1) 2009 virus NA may be lost within several years as the virus adapts to create a long-term epidemic in humans, as has been observed with previous pandemics [6].
For the reverse genetics viruses possessing N2 NA, the absolute sialidase activity at pH 6.0 of the low-pH-stable NA of the 1968 H3N2 virus was approximately half that of the low-pH-unstable NA carrying two mutations, arginine to lysine at 344 and phenylalanine to leucine at 466 (1968 H3N2 N2 numbering), of the corresponding virus. At pH 6.0, the absolute sialidase activity of the low-pH-unstable NA of the 1968 H2N2 virus was similar to that of the low-pH-stable NA with a mutation of leucine to phenylalanine at 466 of the corresponding virus. Of these viruses, those with low-pH-stable NA showed higher replicability in MDCK cells and in a mouse model than those with low-pH-unstable NA [8]. In the present study, when the same amounts of NA genes were transfected, the fluorescence intensities resulting from sialidase activity at pH 6.0 were 16272 (±1680) for Cal04 NA and 13859 (±1558) for Cal04 V106I/N248D NA; a paired t-test showed no significant difference between these activities. Taken together, these results suggest that the enhancement of virus replicability by low-pH-stable NA is not associated with absolute sialidase activity.
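The comparison referred to above is a paired t-test on per-experiment fluorescence intensities; a minimal sketch (replicate values are invented, as the paper reports only the means and standard deviations quoted above):

```python
from scipy import stats

# Hypothetical paired replicates, arbitrary fluorescence units.
cal04_wt  = [16272.0, 14100.0, 17200.0]   # wild-type Cal04 NA
cal04_mut = [13859.0, 15200.0, 14900.0]   # Cal04 V106I/N248D NA

t_stat, p_value = stats.ttest_rel(cal04_wt, cal04_mut)
print(f"t = {t_stat:.2f}, p = {p_value:.3f}")
# The paper reports no significant difference; these toy numbers are
# illustrative only and do not reproduce the actual dataset.
```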
Pandemic (H1N1) 2009 virus with low-pH-stable NA formed much larger plaques than did the virus with low-pH-unstable NA. This mirrors the results for viruses with low-pH-stable NA of the H3N2, H2N2 [8], and H1N1 [7] subtypes. We previously investigated which step(s) of the infection process is affected by the low-pH stability of the NAs of the 1968 H3N2 and 1968 H2N2 viruses and found that it significantly affected the virus yield per cell [8]. For pandemic (H1N1) 2009 virus, the larger plaques are therefore likely to reflect a similar enhancement of virus replication in cells by the low-pH-stable NA.
We previously investigated the replicabilities of reverse genetics viruses with a WSN strain backbone possessing low-pH-stable or low-pH-unstable N2 NA in a mouse model. The WSN strain is known to have a high level of virulence and lethality in mice. In the lungs of infected mice, the viruses possessing low-pH-stable NA showed much higher replicability than the viruses possessing low-pH-unstable NA; these viruses were not detected in organs other than the lungs [8]. Thus, our previous study already indicated that the enhancement of virus replication by low-pH-stable NA is not limited to MDCK cells. In the present paper, we checked the replicability of reverse genetics viruses with a pandemic (H1N1) 2009 strain backbone possessing low-pH-stable or low-pH-unstable NA in human lung adenocarcinoma Calu-3 cells. As in MDCK cells, the virus possessing low-pH-stable NA showed higher replicability in Calu-3 cells than the virus possessing low-pH-unstable NA, again indicating that the enhancement of virus replication by low-pH-stable NA is not limited to MDCK cells.
In our previous studies, the NAs of all of the pandemic human viruses of 1918, 1957, and 1968 were low-pH-stable [4-8]. The low-pH-stable NAs of the pandemic 1918 H1N1 and 1968 H3N2 viruses enhanced the replication of reverse genetics viruses with a WSN strain backbone in cells and in a mouse model [7,8]. Phylogenetic analysis and investigation of the low-pH stabilities of NA indicated that the low-pH-stable NA of the pandemic 1968 H3N2 virus was inherited from the low-pH-stable NA of the H2N2 virus, and that this property was lost from human H3N2 viruses after 1971 [6]. Since that investigation was carried out only on a yearly scale, the detailed relationship between low-pH-stable NA and a pandemic remained unknown. In the present study, the NAs of some pandemic (H1N1) 2009 viruses were more low-pH-stable than the NAs of seasonal H1N1 viruses. Thus, whereas our previous studies found that all three pandemic viruses of the 20th century (1918 H1N1, 1957 H2N2, and 1968 H3N2) had low-pH-stable NA, the present work confirms a low-pH-stable NA in the 2009 pandemic virus as well, and shows that low-pH-stable NA enhances the replication of reverse genetics virus even with a pandemic (H1N1) 2009 backbone. For this pandemic, we can investigate the transition of the low-pH stability of NA not only on a yearly scale but also on a daily or monthly scale at an early stage of the pandemic. Determination of the NA mutations that induce low-pH stability enabled a phylogenetic analysis predicting the transition of low-pH stability from the 2009-2010 H1N1 NA genes in a databank. At an early stage of the 2009 pandemic in the United States, the NAs were low-pH-unstable until early April 2009, but from late April 2009 almost all isolates had low-pH-stable NA caused by the two mutations. Acquisition of a low-pH-stable NA by a pandemic candidate virus might thus be one of the factors promoting its spread at an early stage of a pandemic. What, then, is the meaning of the transient gain of low-pH stability in NA?
Enhancement of virus replication through a low-pH-stable NA, which can be acquired by only one or a few mutations, might contribute to spread among humans not yet immunized against a new subtype virus. Once a new subtype virus has spread worldwide, however, it becomes difficult for the virus to be maintained among the many humans now immunized against it. The virus might then require a decrease in replicability, through a low-pH-unstable NA, to sustain continual antigenic variation and infection in a largely immunized population (the epidemic phase). The present study provides new insights into the mechanism underlying the occurrence of a pandemic.
Supporting Information
Table S1. Accession numbers and amino acid residues at positions 106 and 248 in the NA genes of the pandemic (H1N1) 2009 viruses and swine H1N1 influenza A viruses used to generate the phylogenetic tree. (DOC)
"Biology",
"Medicine"
] |
Advances in surfaces and osseointegration in implantology. Biomimetic surfaces
The present work is a review of the processes occurring in the osseointegration of titanium dental implants according to different types of surfaces, namely polished surfaces, rough surfaces obtained by subtraction methods, and the new hydroxyapatite biomimetic surfaces obtained by thermochemical processes. The high temperatures of hydroxyapatite plasma projection have been shown to prevent the formation of crystalline apatite on the titanium dental implant, leading instead to the formation of amorphous (non-crystalline) calcium phosphate. This layer produces some osseointegration, yet it eventually dissolves and leaves a gap between the bone and the dental implant, leading to osseointegration failure due to bacterial colonization. A surface recently obtained by thermochemical processes produces, by crystallization, a layer of apatite with the same mineral content as human bone that is chemically bonded to the titanium surface. Osseointegration speed was tested in minipigs, showing bone formation after 3 to 4 weeks sufficient for a dental implant to be loaded safely. This surface is therefore an excellent candidate for immediate or early loading procedures. Key words: Dental implants, implant surfaces, osseointegration, biomimetic surfaces.
Introduction
Dental implants represent a valid therapeutic option for the replacement of missing teeth (1). Developments in implantology have extended the reach of dental treatment through implant placement, since implants provide long-term, stable support for a dental prosthesis subjected to chewing load (2).
The biological principles followed for implant placement have been described by several authors and can be summarized in the concept of osseointegration, defined as the direct and structural connection between living, structured bone and the surface of an implant subjected to a functional load (3). The earliest studies on this phenomenon were developed by Branemark in the 1950s, 1960s and 1970s, as well as by Schroeder (4), who proved that the alveolar bone is capable of forming a direct connection with a bolt-shaped alloplastic material such as titanium placed in a surgically created bed. Since implantology's earliest stages, the growing interest of clinicians in this type of treatment has driven research into the biological principles underlying osseointegration. A concept emerging from the studies by Johansson and Albrektsson is that osseointegration is a time-related phenomenon: rigidity at the bone-implant interface increases with time, reaching a high level 3 months after implant placement and continuing to increase progressively until 12 months after placement (5). The time necessary for implant osseointegration is variable, as it depends on a series of factors related on the one hand to the bone and on the other to implant features. According to Branemark's protocol (3), waiting time before implant loading traditionally ranged from 3 to 6 months, depending on whether the implant was placed in the maxilla or the mandible. The implants used at that time were made of commercially pure titanium machined from bars, and their surface topography resulted from the drilling process and subsequent electrolytic polishing; such surfaces are known as smooth or machined surfaces. The implant's surface features have been shown to influence the healing of the surrounding bone (6), and, as shown by histological studies in Beagle dogs, osseointegration can be achieved within 6 weeks under normal conditions with rough surfaces obtained through subtraction methods (7). The morphology of these surfaces is involved in a series of biological events occurring after implant placement, ranging from protein adhesion to peri-implant bone remodeling. These phenomena are favored by a particular surface roughness, allowing quicker osseointegration, which, from a clinical viewpoint, permits prosthesis placement within shorter time periods. Immediate or early implant loading is a procedure that has returned to use with good medium- to long-term results in recent years (8). This is partly due to the use of implants with more osteophilic surfaces, which help maintain implant stability throughout the first weeks of osseointegration. The reduction in primary implant stability caused by initial bone resorption is counterbalanced by quicker bone neoformation, which leads to increased secondary stability and more predictable osseointegration. Implant surface treatment aims to provide particular features that elicit an excellent biological response in the surrounding tissue. Several methods exist for treating dental implant surfaces, such as machining, electropolishing, plasma spraying, coating, acid etching, surface oxidation, ionization, calcium phosphate (apatite) deposition techniques in some cases, or any combination of them (9).
Implant surfaces can be classified into three main categories according to the biological response they elicit: bioinert, osseoconductive and bioactive surfaces. Bioinert surfaces are those around which bone healing proceeds from the bone toward the implant surface (slow healing). Osseoconductive surfaces are characterized by a surface morphology that promotes bone neoformation on the implant surface itself (i.e., bone starts forming from the surface toward the periphery). Depending on the surface processing received, these can present different degrees of roughness and/or topographies that favor interaction with the proteins promoting migration of osteoblast precursor cells. Bioactive surfaces are those around which rapid bone neoformation occurs from the implant surface; apart from different degrees of roughness, they display bioactive molecules or growth factors that induce bone formation through various mechanisms of action. A recently developed bioactive implant surface, based on the experimental studies by Pattanayak et al. (10), can imitate the early stages of the osteoblastic formation of the mineral part of bone. This is possible thanks to a new thermochemical treatment of titanium that creates a calcium phosphate layer on contact with biological fluids, prior to the arrival of osteoblastic cells. The use of implants with this type of biomimetic surface would allow quicker and more reliable osseointegration in cases of immediate or early implant loading. The present paper aims to update the understanding of osseointegration mechanisms by describing the tissue response to different implant surfaces, and to introduce the concept of the new biomimetic surface obtained by thermochemical methods.
Implant osseointegration. Present-day concepts
-Present-day concepts Bone is a mineralized connective tissue structured to bear mechanical loads. The direct and structural connection between living bone and the surface of an implant subjected to functional load was defined by Branemark as osseointegration (3). This phenomenon has been described and researched since the 1950s and still generates interest in modern implantology. The most widely researched alloplastic material for dental implant manufacture is pure titanium and its alloy Ti6Al4V, typically in a bolt (screw) shape. Titanium presents good biocompatibility, resistance to corrosion, and excellent mechanical properties. Osseointegration of the implant surface is what allows the implant to be subjected to chewing loads, which are transmitted to the bone. Osseointegration as described by Branemark (3) is a clinical concept that refers more to the stability of an implant in close contact with bone and subjected to chewing loads than to the true microscopic junction of bone tissue and implant surface. This junction is the consequence of the biological events that lead bone cells to interact with the implant surface after surgical trauma. The bone reacts to implant placement with a healing process very similar to the intramembranous ossification produced after bone fracture, except that the neoformed bone is in contact with the surface of an alloplastic material, the implant. Several biological events can be recognized during healing of the bone surrounding the implant: protein adsorption, clot formation, granulation tissue formation, provisional matrix formation, interface formation, and bone apposition and remodeling.
-Protein adsorption Immediately after placement, the dental implant is soaked in blood and the proteins present are subsequently adsorbed onto its surface. The wettability of the implant surface plays a relevant role in blood protein adsorption, since it has been shown that both excessive hydrophilicity (contrary to what is generally thought) and excessive hydrophobicity hinder protein adsorption (11). Indeed, neither highly hydrophilic nor extremely hydrophobic surfaces allow the formation of a liquid drop with enough volume for proteins to be adsorbed onto the implant surface. Once blood can properly wet the implant surface, proteins (including cytokines) can be adsorbed and remain on the surface to act as signals for the migration of osteoblastic cell lines, which will form the new bone around the implant and allow osseointegration. Subsequently, neutrophils and macrophages probe the implant and, according to the conformation, orientation and type of adsorbed proteins (12), macrophages interact with the implant surface and secrete a particular type and number of cytokines (molecular messengers) that can either recruit the osteoblastic cell line responsible for bone formation in direct contact with the implant surface, or recruit the fibroblastic cell line that encapsulates the biomaterial in fibrous connective tissue and results in osseointegration failure.
Protein adsorption occurs practically instantaneously, preceding and thereby preventing direct cell-biomaterial contact. Indeed, after the surface is exposed to blood, adsorption occurs within around 5 seconds (13). The nature of the single layer of proteins adsorbed on the implant surface constitutes the key factor in the cell response, since cells have been shown to depend on specific proteins for adhesion (14). In particular, osteoblasts require specific interactions to adhere, proliferate and differentiate, and these interactions are defined by the number and type of proteins adsorbed on the implant surface. The chemical and topographic nature of the implant surface determines protein adsorption and conformation on that surface (15).
-Types of proteins For osteoblasts to initiate bone formation around the implant, they must first adhere to the implant surface. In vitro studies have shown that the adhesion of these cells depends on specific proteins adsorbed on implant surfaces, such as fibronectin, osteopontin and vitronectin. In-vitro and in-vivo studies have identified vitronectin as the protein that usually predominates in cell adhesion processes, followed by fibronectin (16) (Table 1), although the latter typically gains relevance once cells begin their differentiation process (17). Implant surfaces play a determining role in the first stages of cell adhesion, since their topographic and physicochemical features can inhibit the adsorption of proteins that promote the migration of undesired cells and thereby provoke implant fibrointegration. TGF-α is an example: it is a protein that favors adhesion of the fibroblastic cell line, and fibroblasts can in turn trigger the migration of undesired cells to the implant (18). Pegueroles et al. (11,15) showed in an in-vitro study that treating titanium dental implant surfaces with alumina particles of a specific grit size (A6) improves fibronectin adsorption relative to smooth titanium surfaces.
-Cell-protein interaction Cells are capable of interacting with adsorbed proteins by means of cell receptors known as integrins. Integrin-protein interactions are completed through the recognition by the integrin of a particular amino acid sequence within the protein.
[Table 1 fragment: Osteocalcin: regulation of osteoclast activity.]
-Blood clot formation
A few minutes after implant insertion into the prepared bed, a blood clot forms between the implant surface and the bone walls of the created bed. It contains mainly red blood cells, platelets and macrophages within a fibrin scaffold. During the first days, a series of cytokines or growth factors (PDGF, TNFα, TGFα, TGFβ, FGF, EGF) are released to stimulate healing of the surgical wound by recruiting different cell lines. Two to three days after implant placement, leukocytes and macrophages complete 'cleaning' tasks through phagocytosis, while the blood clot is simultaneously broken down through fibrinolysis to leave space for new blood vessels.
-Granulation tissue formation Four days after placement, blood vessel growth produces a granulation tissue that occupies the space between the implant and the bone. This tissue is characterized by the presence of undifferentiated mesenchymal cells around vascular structures in a fibrin scaffold. Preparation of the surgical bed, through the tissue trauma itself, releases specific cytokines such as BMP2 and BMP4, which induce the differentiation of undifferentiated mesenchymal cells from the bone marrow and the perivascular compartment (pericytes), first into pre-osteoblasts and subsequently into mature osteoblasts.
-Provisional matrix formation Osteoblastic cells move through the space between the bone and the implant, their migration guided by the fibrin scaffold. On osseoconductive surfaces, such as those obtained by blasting and acid etching, cells adhere to the proteins adsorbed on the implant surface and start forming a provisional bone matrix (20).
Osteoblasts are incapable of producing matrix and migrating at the same time, so they stop moving along the fibrin scaffold once they have started to produce bone matrix. If the fibrin scaffold detaches from the implant surface during migration, osteoblasts will not reach the surface directly, and no bone formation will take place from the implant surface (20). Fibrin adhesion, however, depends on the type of implant surface. On smooth or machined titanium surfaces, fibrin is detached during osteoblast migration, whereas on rough surfaces the adhesion force of fibrin is higher and cells can migrate all the way to the implant surface. Two main types of osseointegration can thus be distinguished: contact osseogenesis as described by Osborn et al. (21), in which bone newly formed at the periphery progressively advances toward the implant bed; and the bone neoformation described by Davies et al. (22), in which osteoblasts that migrate to the implant surface through the fibrin scaffold form new bone from the implant back toward the bed walls.
-Bone apposition Bone neoformation starts in the early healing stages: after 7 days, a provisional matrix rich in collagen fibers, vascular structures, osteoblasts and some areas of newly formed bone (bone apposition) begins to form (23). Growth factors such as BMP 2 and 4 take part by stimulating the further migration of undifferentiated mesenchymal cells, and BMP 7 by promoting osteoblast differentiation. After 14 days the implant-bone gap is occupied by newly formed or woven bone, rich in collagen fibers, vascular structures and osteoblasts, which form a reticular structure. At this stage osteoblasts produce the interface bone and, on osseoconductive surfaces, can be found parallel to the surface in contact with the implant. Early bone neoformation on the implant surface appears more characteristic of rough surfaces than of machined titanium (23). At the centre of the newly formed bone tissue some osteocytes can be observed, while osteoclasts appear on the surface of the bed bone, indicating resorption of necrotic bone. During the apposition process, bone structure progressively transforms from reticular to lamellar. Reticular bone is fragile and poor in calcium phosphate crystals; it transforms first into bone rich in parallel fibers and then into lamellar bone, a mineralized tissue capable of withstanding mechanical loads. The duration of this bone apposition process varies with the implant surface type, being around 4 weeks on rough surfaces obtained by blasting and acid etching (24).
-Remodeling Once formed, peri-implant bone undergoes a remodeling process in which parallel-fiber bone is largely replaced by lamellar bone and the bone architecture progressively adapts to its functional load (25). In this stage osteoblasts and osteoclasts work synergistically, apposing and resorbing bone according to functional needs. The bone-implant interface is under continuous remodeling, and close contact between peri-implant bone and the implant is essential to keep it functioning in the long term.
Osseointegration on bioinert and osseoconductive surfaces
Recent years have witnessed progressive development of dental implants, and considerable resources have been invested in improving implant surfaces. The screw-shaped implant developed by Adell et al. was a pioneer in implantology, and its use has shown good long-term clinical results (26). This titanium implant is characterized by a smooth or minimally rough (Sa < 0.5 μm) surface, resulting from machining, which gives it characteristic grooves repeated with a clear orientation across the implant (an anisotropic surface). This type of surface has been improved over the years by creating greater roughness so as to facilitate cell adhesion and thus accelerate osseointegration. While the first rough surfaces were obtained through additive particle processes such as titanium plasma spraying, the most modern rough surfaces are obtained by subtractive methods; among the most widely used are aluminum oxide blasting, acid etching, surface oxidation and combinations of these (9). These procedures can produce three main types of implant surface: micro-structured rough surfaces (Sa = 0.5-1 μm), moderately rough surfaces (Sa = 1-2 μm), and highly rough surfaces (Sa > 2 μm) (27). Results in the literature confirm the greater effectiveness of rough surfaces relative to machined titanium, since a greater proportion of bone surface comes into contact with the implant (5), leading to improved (28) and quicker (24) osseointegration. These results may be explained by different cell responses in the earliest osseointegration stages. First, surface roughness significantly increases wettability and protein adsorption, which in turn favor cell migration and adhesion (11). Davies et al., however, hold that the more favorable osseointegration is due to the greater adhesion force of the clot's fibrin scaffold on rough versus smooth surfaces (20). The fibrin scaffold allows osteoblasts to migrate toward the implant surface before these cells start to produce calcium phosphate crystals (hydroxyapatite). If the adhesion of fibrin to the implant surface is strong enough, osteoblasts can migrate through the scaffold and come into contact with the implant surface. On machined titanium surfaces, however, the bond between titanium and fibrin is not stable enough to withstand the 'weight' of migrating osteoblasts, so the fibrin scaffold separates from the implant. In this situation osteoblasts never reach the implant surface, and new nuclei of bone formation arise closer to the implant bed and away from the implant surface. On rough surfaces, by contrast, the fibrin scaffold does not detach from the implant during osteoblast migration because of its tighter surface bond, allowing osteoblasts to reach the surface and start the bone apposition process. A distinction can thus be drawn between bioinert machined titanium surfaces, on which 'contact osseointegration' occurs (21), i.e., progressive bone apposition from the bed periphery toward the implant surface; and osseoconductive surfaces, on which 'bone neoformation' is observed (22), i.e., bone apposition proceeding simultaneously from the implant surface and from the bed.
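The Sa-based roughness classes quoted above can be summarized compactly. The sketch below is purely illustrative: the thresholds come from the text (27), while the function name, category labels, and the handling of the overlapping range endpoints are assumptions of ours.

```python
# Illustrative sketch only: Sa thresholds are those quoted in the text (27);
# boundary handling at exactly 0.5, 1 and 2 um is an assumption.

def classify_surface(sa_um: float) -> str:
    """Classify an implant surface by arithmetic mean roughness Sa (micrometres)."""
    if sa_um < 0.5:
        return "smooth / minimally rough (machined titanium)"
    elif sa_um < 1.0:
        return "micro-structured rough"
    elif sa_um <= 2.0:
        return "moderately rough"
    else:
        return "highly rough"

for sa in (0.3, 0.8, 1.5, 2.5):
    print(f"Sa = {sa} um -> {classify_surface(sa)}")
```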
-A new paradigm: the biomimetic surface Rough surfaces obtained through subtractive methods such as aluminum oxide (Al2O3) particle blasting and acid etching show improved in-vivo response relative to smooth surfaces (28). This procedure yields a surface topography characterized by concavities forming peaks and valleys that favor increased osteoconduction and, consequently, quicker bone growth with increased bone adhesion force (24). However, implants with these surfaces still show reduced stability in the period between the second and fourth week after placement. This is due to resorption of the bone initially in contact with the implant and also to the still slow bone neoformation, which fails to confer stiffness on the bone-implant bond. Consequently, an increased number of implant micro-movements may occur, and increased implant movement has been shown to promote the formation of fibrous connective tissue and ultimately lead to implant failure (29). This phenomenon occurs more frequently when the implant is subjected to functional load during the healing stage, as in immediate loading. In such procedures the biological behavior of the surface becomes even more relevant, since the objective of an ideal surface also includes maintaining implant stability during the critical stage of osseointegration. Once the implant is in contact with the bed after placement, osteoblasts derived from mesenchymal cells of the bone marrow form the first layers of calcium phosphate on the implant surface. This process, responsible for the formation of the first reticular bone, occurs in parallel with resorption of the bed walls by osteoclastic cells.
A surface providing quicker bone apposition in the first weeks after placement would reduce the loss of implant stability during this critical stage and thus lower the risk of osseointegration failure in an implant subjected to chewing load. Coatings with a composition similar to that of bone are an attractive strategy for accelerating osseointegration during the earliest healing stages. In particular, calcium phosphate apatite has the same chemical composition as the mineral phase of bone, so it is fully accepted by the organism and produces no inflammatory reaction (30). Many research groups have applied coatings to titanium implants by different techniques, such as hydroxyapatite plasma spraying (31). Although the literature reports good clinical results for hydroxyapatite-coated implants, comparable to those achieved with titanium surfaces (32), coatings obtained with certain techniques have important drawbacks, such as poor adherence between the implant titanium and the hydroxyapatite layer. In fact, additive techniques such as hydroxyapatite plasma spraying do not allow the formation of crystalline apatite but only of amorphous calcium phosphate, owing to the high processing temperatures (33). The properties of this layer are inappropriate, since it is extremely soluble and achieves only mechanical retention on the titanium, not true adhesion. Indeed, plasma-sprayed hydroxyapatite surfaces have shown poor long-term clinical behavior: in spite of quick initial osseointegration, detachment of the osteophilic surface layer over time allows bacterial filtration into the interface and progressive loss of osseointegration due to peri-implantitis (34). Recent studies have shown that other methods can produce calcium phosphate coatings with higher homogeneity and chemical stability (35). These new methods propose in-vitro apatite growth directly bonded to the surface, achieving greater adherence and better control of layer thickness; this can be achieved through combined surface, thermal and chemical treatments. Pattanayak et al. produced apatite deposits based on the formation of a thick amorphous gel of sodium titanate on the surface which, once immersed in ion-supersaturated serum (mainly calcium and phosphorus), spontaneously generates a thin apatite layer that increases direct and structural connection with the structured bone (10). There are major differences between this thermochemical treatment and plasma-based calcium phosphate deposition, since plasma spraying starts from very high temperatures (6000-9000 ºC), at which the projected calcium phosphate is in the plasma state and solidifies when launched onto the dental implant. First, plasma solidification yields no crystalline calcium phosphate structure but an unstructured material, amorphous calcium phosphate, which cannot properly be called apatite because it lacks a crystalline structure and is more similar to a frozen liquid. This is very important because, on plasma-coated surfaces, amorphous calcium phosphate dissolves much more quickly than crystalline phosphate. Osteoblasts also begin the bone apposition process on this non-crystalline calcium phosphate layer, but the newly formed bone loses direct contact with the implant surface once the calcium phosphate layer dissolves; consequently, this phenomenon delays the initial stages of osseointegration.
In addition, calcium phosphate cooled down from such high temperatures is very fragile, since ceramic materials cannot withstand the volume changes caused by such sudden temperature shifts. Finally, the main limitation of plasma-formed layers is that they have no chemical bond to the titanium; their stability relies mainly on mechanical interlocking between the titanium roughness and the amorphous mass of calcium phosphate. The biological consequence is bacterial microfiltration at the interface, which in turn leads to loss of osseointegration through progressive peri-implantitis (34).
Obtaining calcium phosphate on the implant surface by means of thermochemical treatment offers numerous advantages. First, the calcium phosphate is not amorphous but crystalline, since it is formed by precipitation. Its structure (assessed by X-ray diffractograms) is therefore the same as that of the calcium phosphate forming the mineral content of bone (hydroxyapatite) (36), which yields a material with lower solubility in biological fluids and allows covalent chemical bonding to the titanium (37). This chemical bond confers excellent long-term stability and prevents any bacterial colonization between the calcium phosphate and the titanium (38). Another important advantage of the thermochemical treatment relative to other hydroxyapatite deposition methods is the high mechanical resistance of the resulting layer, since the extreme temperature changes of plasma treatment are avoided (39). This method can be said to provide a biomimetic surface: thanks to the bioactivity of Na+ ions, the sodium titanate layer covering the implant can, once in contact with biological fluids, form a hydroxyapatite layer on its own without the participation of osteoblasts. This phenomenon has been demonstrated both in vitro and in vivo by our research group, with accelerated osseointegration observed relative to untreated surfaces (28,37) (see Fig. 1). Gil et al. have shown in histological studies in minipigs that thermochemical treatment of type-3 titanium dental implant surfaces can achieve full implant osseointegration within 4 weeks (37).
In their most recent study, Gil et al. examined the osseointegration capacity of 320 implants in minipigs, comparing the bone response to different types of surface (37). The surfaces assayed were: a biomimetic surface obtained by combined aluminum oxide blasting and acid etching plus thermochemical treatment; a rough surface obtained by aluminum oxide blasting; a rough surface obtained by acid etching; and a smooth surface as control.
The implants used in this study were characterized by a 1.5-mm polished neck, 12-mm length and 1-mm thread pitch. Implant surfaces were first characterized by electron microscopy and contact angle measurements; the in-vivo test was then carried out by placing implants in minipigs whose teeth had been extracted 4 months earlier. Four implants of each type were placed in each animal, and animals were sacrificed 3 days and 1, 2, 3 and 10 weeks after the intervention for histological study. Regarding surface characterization, no significant differences in roughness values (Sa and Sm) were observed between the biomimetic surface and the blasting-obtained rough surface; however, significant differences were found between these two and the rough surface obtained by acid etching (see Table 2). The biomimetic surface showed a lower contact angle than the blasting-obtained rough surface, indicating greater wettability and better behavior in contact with blood.
Regarding bone-implant contact (i.e., the proportion of bone in contact with the implant), the biomimetic surface showed significantly higher values than the other surfaces at 3 days and at 1, 2, 3 and 10 weeks after placement, although similar values were observed for the blasting-obtained rough surface at 10 weeks (see Fig. 2). The biomimetic surface presented remarkably high osseointegration values in the early healing stages, around 75% and 80% at 2 and 3 weeks after placement, respectively, in this animal model. It was also the only surface that clearly showed extensive areas of newly formed bone in direct contact with the implant after only one week of healing (see Fig. 3). This phenomenon can be explained by the osteoconductive effect of the thermochemical treatment, which naturally leads to the formation of calcium phosphate crystals on the implant surface once it comes into contact with biological fluids. These encouraging results suggest that this new surface could bring substantial clinical benefits for immediate or early implant-loading protocols, but they still need to be confirmed by clinical trials in humans, which are currently under development.
Conclusions
Dental implant osseointegration is a phenomenon that has been studied for a long time, but recent bioengineering has enabled us to understand the different biological events that characterize it: protein adsorption, clot formation, granulation tissue formation, provisional matrix formation, interface formation, bone apposition and remodeling. Protein adhesion has been shown to play a key role in the earliest stages of osseointegration, where the presence of fibronectin and vitronectin favors proliferation of the osteoblastic cell line, while proteins such as TGF-α inhibit it. Rough implant surfaces with Sa over 1-2 μm lead to quicker osseointegration than micro-rough surfaces (Sa = 0.5-1 μm) owing to the phenomenon of bone neoformation, in which bone starts to form from the implant surface toward the periphery at greater speed. Implants presenting hydroxyapatite on their surface lead to accelerated osseointegration owing to the affinity of osteoblasts for calcium phosphate. However, the surfaces produced to date have presented long-term problems related to the bonding of this layer to the underlying titanium. A biomimetic surface has been developed by means of thermochemical processing of titanium that allows the formation of a crystalline calcium phosphate layer (hydroxyapatite) when the implant comes into contact with biological fluids. Animal studies show that this new surface can produce osseointegration in significantly shorter times than rough surfaces obtained by aluminum oxide blasting and acid etching. In-vivo studies show full implant osseointegration within 3 weeks, which would facilitate the use of immediate and early loading protocols. These encouraging results need to be confirmed by clinical studies.
[Figure caption: rough surface obtained by aluminum oxide particle blasting, acid etching and thermochemical treatment. Bone neoformation occurs significantly more quickly on this surface, whose bone-implant contact (BIC) exceeds 70% and peaks at two and three weeks after placement, respectively.]
Fig. 3. A) Histology of an acid-etched rough surface implant 3 days (a), 1 (b), 2 (c), 3 (d) and 10 weeks (e) after placement. B) Histology of a rough surface implant obtained by aluminum oxide particle blasting and acid etching 3 days, 1, 2, 3 and 10 weeks after placement; this surface shows accelerated ossification relative to the acid-etching-only treatment. C) Histology of a rough surface implant obtained by aluminum oxide particle blasting, acid etching and thermochemical treatment 3 days, 1, 2, 3 and 10 weeks after placement; this surface shows accelerated ossification relative to the blasting plus acid etching treatment. Note the abundant presence of newly formed mature bone in contact with the implant surface. | 6,776 | 2015-02-07T00:00:00.000 | [
"Materials Science",
"Medicine"
] |
Effects of Dietary Phosphate on Adynamic Bone Disease in Rats with Chronic Kidney Disease – Role of Sclerostin?
High phosphate intake is known to aggravate renal osteodystrophy along various pathogenetic pathways. Recent studies have raised the possibility that dysregulation of the osteocyte Wnt/β-catenin signaling pathway is also involved in chronic kidney disease (CKD)-related bone disease. We investigated the role of dietary phosphate and its possible interaction with this pathway in an experimental model of adynamic bone disease (ABD) in association with CKD and hypoparathyroidism. Partial nephrectomy (Nx) and total parathyroidectomy (PTx) were performed in male Wistar rats. Control rats with normal kidney and parathyroid function underwent sham operations. Rats were divided into three groups and underwent pair-feeding for 8 weeks with diets containing either 0.6% or 1.2% phosphate: sham 0.6%, Nx+PTx 0.6%, and Nx+PTx 1.2%. In the two Nx+PTx groups, serum creatinine increased and blood ionized calcium decreased compared with the sham control group. These groups also presented hyperphosphatemia and reduced serum parathyroid hormone (PTH) and fibroblast growth factor 23 (FGF23) levels. Fractional urinary excretion of phosphate increased in Nx+PTx 1.2% rats despite lower PTH and FGF23 levels than in the sham group. These biochemical changes were accompanied by a decrease in bone formation rates. The Nx+PTx 1.2% group had lower bone volume (BV/TV), higher osteoblast and osteocyte apoptosis, and higher SOST and Dickkopf-1 gene expression than the Nx+PTx 0.6% group. Nx+PTx 0.6% rats had very low serum sclerostin levels, and Nx+PTx 1.2% rats had intermediate sclerostin levels, compared with the sham group. Finally, there was a negative correlation between BV/TV and serum sclerostin. These results suggest that high dietary phosphate intake decreases bone volume in an experimental model of CKD-ABD, possibly via changes in SOST expression through a PTH-independent mechanism. These findings could have relevance for the clinical setting of CKD-ABD, in which low-turnover bone disease might be attenuated by optimal control of phosphate intake and/or absorption.
Introduction
The chronic kidney disease associated mineral and bone disorder (CKD-MBD) is characterized by complex endocrine and metabolic disturbances, with a wide variability in terms of bone turnover, ranging from extremely low to extremely high bone formation rates [1]. Adynamic Bone Disease (ABD) as one extreme of the different forms of renal osteodystrophy has become an increasingly common manifestation of bone abnormalities in CKD patients. ABD may be associated with serious clinical consequences such as fractures [2,3] and vascular calcification [4,5], which in turn may contribute to the high mortality rates of patients with CKD.
The hallmark of ABD is a decrease in bone turnover together with normal or low osteoid surface [1,6-10]. The pathogenesis of ABD linked to CKD is multifactorial, including old age, diabetes mellitus, uremic toxins, and excessive suppression of secondary hyperparathyroidism by high calcium input via the dialysate or calcium-containing phosphate binders, pharmacological doses of active vitamin D sterols, or parathyroidectomy [11,12]. At the cellular level, resistance to parathyroid hormone (PTH) secondary to PTH receptor downregulation and decreased osteoblast number and activity are prevalent features. The latter results from reduced osteoblast proliferation and enhanced apoptosis, which are important factors in the determination of bone formation rates [11]. However, a clear understanding of the molecular mechanisms that lead to ABD, as well as the potential role of other bone cell types, in particular the osteocyte, is still lacking.
Osteocytes synthesize bone remodeling factors including the receptor activator of nuclear factor kappa-B ligand (RANKL), osteoprotegerin (OPG) and sclerostin, and thus participate in the control of osteoclast and osteoblast activity [13-16]. Recent studies have suggested that the Wnt/β-catenin signaling pathway, which is expressed not only in the osteoblast but also in the osteocyte, plays a role in the regulation of normal bone turnover [17] and in renal osteodystrophy [18].
Wnt/β-catenin pathway inhibitors such as sclerostin, encoded by SOST gene and produced by mature osteocytes, and Dickkopf-1 (Dkk-1), encoded by Dkk-1 gene and expressed by a variety of cells [19], antagonize Wnt/β-catenin canonical signaling and thereby lead to decreased bone formation [20]. A recent study has shown that the serum levels of both inhibitors are elevated in hemodialysis patients, with an inverse correlation of serum sclerostin with serum PTH and bone formation rate [21].
The effects of PTH on bone may be mediated, at least partly, by changes in sclerostin expression. Exogenous administration of PTH has been shown to result in downregulation of osteocytic sclerostin expression, both in vivo and in vitro [22,23]. In addition to PTH, other factors are possible regulators of SOST gene expression, like decreased mechanical loading [24].
Whether suppression of PTH secretion contributes to CKD-associated ABD via changes in Wnt/β-catenin pathway activity has not been investigated. Although ABD in CKD patients occurs most frequently in the context of low or normal serum PTH levels, it can also be observed in the presence of high PTH levels [25]. Therefore, factors other than PTH clearly play a role as well, including calcium and phosphate overload. In order to gain a more detailed insight into the pathogenesis of this disease, we induced ABD in an experimental rat model of hypoparathyroidism combined with chronic kidney failure. Our main purpose was to examine the influence of the uremic state itself and the importance of phosphate overload.
Materials and Methods
This study was carried out in strict accordance with the recommendations in the Guidelines of the standing Committee on Animal Research of University of São Paulo. The protocol was also approved by the Committee on the Ethics of Animal Experiments of University of São Paulo (Permit Number: 0962/08). All surgery was performed under pentobarbital anesthesia, and all efforts were made to minimize suffering.
Experimental protocol
Male Wistar rats, initial body weight 300-350 g, were obtained from our local breeding colony for use in this study. They were housed in individual cages in a light-controlled environment (12 h on/12 h off), at constant temperature (25°C) and humidity (25%), and fed a standard diet (Lab Diet 5002, Purina Mills, USA) containing phosphate (0.6%), Ca (0.8%), protein (20%) and vitamin D3 2.2 IU/g for one week. Thereafter, they were anesthetized with pentobarbital (50 mg/kg I.P.) and divided into three groups. Two groups underwent total parathyroidectomy (PTx), using a microsurgical technique with electrocautery, and 5/6 nephrectomy (5/6 Nx), as described previously [26]. A third group underwent sham operation (sham Nx+sham PTx). One day after surgery, animals were allocated to different diets: Nx+PTx 0.6%, which received a 0.6% phosphate diet (Lab Diet 5002, Purina Mills, USA); Nx+PTx 1.2%, which received a 1.2% phosphate diet (Modified Lab Diet 5002 w/1.2% P, USA); and the sham group (sham Nx+PTx), which received a 0.6% phosphate diet (Lab Diet 5002, Purina Mills, USA). All diets thus had the same composition of Ca, protein and vitamin D3, differing only in phosphate content. A pair-feeding protocol was used, in which the amount of feed provided to each pair of animals was determined by the animal of the pair that had eaten less food. Weight measurements and tail cuff plethysmography recordings were performed weekly. Water access was ad libitum. The study duration was 8 weeks. A fluorochrome bone marker (Terramycin®) at a dose of 25 mg/kg was injected I.P. on days 11 and 12, as well as on days 4 and 5 before sacrifice. For the last two days of the study, the rats were held in metabolic cages and 24 h urine samples were collected. Eight weeks after surgery, rats were anesthetized and sacrificed by exsanguination through aortic puncture. Serum samples were frozen at -20°C for later biochemical evaluation. The heart was excised and the left ventricle dissected and weighed. Femurs were removed for bone histomorphometry and tibiae were removed for evaluation of osteoblast and osteocyte apoptosis and gene expression.
Bone histomorphometry
At sacrifice, the left femur of each rat was removed, dissected free of soft tissue, immersed in 70% ethanol, and processed as described previously [28]. Static, structural and dynamic parameters of bone formation and resorption were measured in the distal metaphysis (magnification 250x; 30 fields), 195 µm from the epiphyseal growth plate, using an Osteomeasure image analyzer (Osteometrics, Atlanta, GA, USA). Structural parameters included trabecular thickness and trabecular separation (expressed in µm) and trabecular number (expressed as number/mm). Static parameters included the ratios of trabecular volume/bone volume, osteoid volume/bone volume, osteoid surface/bone volume, osteoblast surface/bone volume, fibrosis volume, eroded surface/bone surface and osteoclast surface/bone surface, all expressed as percentages, and osteoid thickness, expressed in µm. Mineral apposition rate was determined as the distance between the two terramycin labels divided by the time interval between the two terramycin administrations, and expressed in µm/day. Mineralization lag time was expressed in days. The percentage of double terramycin-labeled (mineralizing) surface per bone surface and the bone formation rate completed the dynamic evaluation. Histomorphometric indices were reported using the nomenclature recommended by the American Society of Bone and Mineral Research [29]. All animal data were obtained with the examiners blinded to the study protocol.
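The two dynamic indices described above can be expressed in a few lines. The sketch below follows the standard ASBMR definitions; the numeric values, including the inter-label interval, are illustrative assumptions, not data from this study.

```python
# Minimal sketch of standard ASBMR dynamic indices; all numbers are illustrative.

def mineral_apposition_rate(interlabel_distance_um: float, interval_days: float) -> float:
    """MAR (um/day): mean distance between the two fluorochrome labels
    divided by the time between the two label administrations."""
    return interlabel_distance_um / interval_days

def bone_formation_rate(mar_um_day: float, mineralizing_surface_fraction: float) -> float:
    """BFR/BS (um^3/um^2/day): MAR scaled by the fraction of bone surface
    showing double labels (MS/BS)."""
    return mar_um_day * mineralizing_surface_fraction

mar = mineral_apposition_rate(interlabel_distance_um=28.0, interval_days=40.0)
bfr = bone_formation_rate(mar, mineralizing_surface_fraction=0.05)
print(f"MAR = {mar:.2f} um/day, BFR/BS = {bfr:.4f}")
```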
Osteoblast and osteocyte apoptosis
Apoptosis was determined in the left tibia by the TUNEL technique (TdT-mediated X-dUTP nick end labeling), following the instructions provided with the ApopTag Plus Peroxidase In Situ Apoptosis Detection Kit. To evaluate the percentage of apoptotic osteoblasts and osteocytes in the cortical and trabecular areas as well as in the bone marrow, we used the point-counting method. Each cell type was analyzed in 60 fields at a magnification of 1,000x, and final values were expressed as percent apoptotic cells.
For the gene expression analyses, bones were harvested and snap-frozen in Trizol (Sigma, St. Louis, MO, USA). Bone shafts were collected, the epiphyses removed, the bone marrow separated via centrifugation, and the shafts placed in Trizol (Sigma, St. Louis, MO, USA). RNA was extracted using the chloroform and isopropanol precipitation method. The extracted RNA was treated with DNase, purified on a Qiagen (Valencia, CA, USA) column and eluted in RNase-free water. A reverse transcriptase reaction was performed subsequently. The generated cDNA was used in single Taqman assays or Taqman low-density arrays (Applied Biosystems, Carlsbad, CA, USA) containing the genes of interest, assayed according to the manufacturer's protocol. The difference in expression was calculated using 18S as the control gene.
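The text states only that 18S served as the control gene. Assuming the standard comparative Ct method (2^-ddCt), which is the usual choice for Taqman assays, the relative expression calculation would look like this; the function name and Ct values are hypothetical.

```python
# Assumed comparative Ct (2^-ddCt) normalization; the paper does not spell
# out its exact formula, only that 18S was the control gene.
def relative_expression(ct_target, ct_18s, ct_target_ref, ct_18s_ref):
    """Fold change of a target gene vs. a reference group, normalized to 18S."""
    d_ct = ct_target - ct_18s              # normalize target to 18S (study sample)
    d_ct_ref = ct_target_ref - ct_18s_ref  # same for the reference (e.g., sham) group
    return 2 ** -(d_ct - d_ct_ref)

print(relative_expression(24.0, 10.0, 26.0, 10.0))  # -> 4.0 (4-fold higher)
```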
Statistical analysis
Results are presented as mean ± standard deviation (SD) or as median (interquartile range). One-way ANOVA and the Kruskal-Wallis test were used for parametric and non-parametric data, respectively. Spearman's rank correlation was used to assess the correlation between two variables. GraphPad Prism software, version 4.0 (GraphPad, San Diego, CA, USA) was used. P values <0.05 were considered statistically significant.
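For readers who want to reproduce this workflow outside GraphPad Prism, the same three tests are available in SciPy; the sketch below uses synthetic placeholder data, so the group means and sizes are assumptions.

```python
# Sketch of the statistical workflow above using SciPy; data are placeholders.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
sham = rng.normal(10.0, 1.0, 8)         # hypothetical sham group values
nx_ptx_06 = rng.normal(8.0, 1.0, 8)     # hypothetical Nx+PTx 0.6% values
nx_ptx_12 = rng.normal(6.5, 1.0, 8)     # hypothetical Nx+PTx 1.2% values

# Parametric three-group comparison (one-way ANOVA)
f_stat, p_anova = stats.f_oneway(sham, nx_ptx_06, nx_ptx_12)

# Non-parametric alternative (Kruskal-Wallis)
h_stat, p_kw = stats.kruskal(sham, nx_ptx_06, nx_ptx_12)

# Spearman rank correlation between two variables,
# e.g. bone volume vs. serum sclerostin
rho, p_corr = stats.spearmanr(nx_ptx_06, nx_ptx_12)

print(p_anova < 0.05, p_kw < 0.05, round(rho, 2))
```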
General data
Initial body weight was comparable among the 3 rat groups. The Nx+PTx groups had lower food intake, lower final body weight, higher tail cuff pressure (TCP) and higher heart weight compared to the sham group. We did not observe differences in food intake, final body weight, TCP or heart weight between the Nx+PTx 0.6% and Nx+PTx 1.2% groups (Table 1).
Laboratory findings
As shown in Table 2, Nx+PTx rats had lower creatinine clearance (Ccreat), with correspondingly higher serum creatinine and phosphate levels and higher albuminuria, as well as markedly lower blood iCa and serum FGF23 levels than the sham group. Fractional excretion of phosphate (phosphate FE) was higher in Nx+PTx 1.2% than in the other groups. Serum calcitriol and calciuria did not differ between the three groups. The Nx+PTx groups had 4-10 times lower median PTH levels than the sham group, but the differences did not reach statistical significance. There were no differences between the Nx+PTx groups in serum creatinine, phosphate, FGF23 or PTH, blood iCa, Ccreat, or albuminuria. Finally, serum sclerostin was lower in the Nx+PTx 0.6% group than in the Nx+PTx 1.2% and sham groups, despite similar serum phosphate levels in the Nx+PTx groups.
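For reference, fractional excretion of phosphate is conventionally computed from paired serum and urine measurements. The paper does not spell out its formula, so the standard clinical definition is shown below as an assumption.

```latex
% Conventional definition of fractional excretion of phosphate (FE_P);
% U and S denote urine and serum concentrations, Cr = creatinine, P = phosphate.
\mathrm{FE}_{P} = \frac{U_{P} \times S_{Cr}}{S_{P} \times U_{Cr}} \times 100\,\%
```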
Bone histomorphometry
As shown in Table 3, both Nx+PTx rat groups showed lower osteoid volume, osteoid surface, osteoblastic and osteoclastic surfaces, eroded surface, mineralizing surface, mineral apposition rate, and bone formation rate relative to control sham animals, and there was no fibrosis. These findings confirmed the achievement of a low bone remodeling status. The Nx+PTx 0.6% group had higher bone volume, with correspondingly lower trabecular separation and higher trabecular number, than the Nx+PTx 1.2% and sham groups, respectively. In addition, Nx+PTx 0.6% animals also showed a higher mineralization lag time and a lower adjusted apposition rate than the two other animal groups (Table 3).
We did not observe any differences in eroded surface or osteoclastic surface between Nx+PTx groups (Table 3). Interestingly, a negative correlation between bone volume and serum sclerostin was found (Figure 1).
Osteoblast and osteocyte apoptotic rate
Nx+PTx 0.6% rats had lower osteoblastic and osteocytic apoptotic rates compared to the sham group. The percentage of apoptotic osteocytes and osteoblasts was higher in the Nx+PTx 1.2% than in the Nx+PTx 0.6% group (Table 4).
Discussion
In this study, we evaluated the effects of dietary phosphate on ABD in rats with CKD and hypoparathyroidism. Compared to the sham group, Nx+PTx rats had higher serum creatinine, lower Ccreat, and higher albuminuria, consistent with the induction of CKD.
Importantly, both Nx+PTx groups had hyperphosphatemia, with no differences between them. However, Nx+PTx 1.2% rats had an increase in phosphate FE subsequent to the higher phosphate load. PTx resulted in blood iCa levels reduced by more than half and was effective in preventing the usual CKD-associated hyperparathyroidism. Hypocalcemia was taken as evidence of successful extirpation of the parathyroid glands, as described by other authors [30,31]. The inhomogeneous reduction of serum PTH levels in the PTx+Nx animals is probably due to the low sensitivity of the PTH assay in the hypoparathyroid range, or to hypocalcemia, which may have stimulated some small remnant gland in some animals, since PTH levels were measured 8 weeks after PTx. Nevertheless, histomorphometry demonstrated low bone remodeling, ruling out any bone effects of hyperparathyroidism. The decreased serum FGF23 levels resulted from PTx, an observation in agreement with previous reports [32,33]. Our results show that dietary phosphate overload can stimulate phosphate FE even when both PTH and FGF23 levels are decreased. This suggests that a different form of renal tubular regulation, separate from a change in luminal phosphate delivery, may exist in the setting of ABD, hypoparathyroidism and phosphate overload. Future studies are certainly needed to evaluate this mechanism, since little is known about the crosstalk between the intestine and other organs in the setting of ABD. The two Nx+PTx rat groups presented lower bone turnover. However, the Nx+PTx 1.2% group had reduced bone volume compared to the Nx+PTx 0.6% group. Induction of bone loss by high phosphate intake has been described in normal individuals [34,35] and animals [36]. In CKD patients, studies have not shown an association between phosphate and osteoporosis. Tani et al. [37] demonstrated in mature rats that prolonged exposure to dietary phosphate excess induces bone loss associated with secondary hyperparathyroidism. However, we have previously reported lower bone volume in Nx+PTx rats fed a high-phosphate (1.2%) diet that developed hyperphosphatemia in the presence of normal PTH. These findings suggest that PTH elevation is not absolutely necessary for phosphate-associated reductions in bone volume [26]. Furthermore, the phosphate effect on bone was independent of the PTH infusion rate [38].
In addition, the higher bone volume observed in the Nx+PTx 0.6% group, as compared to the Nx+PTx 1.2% group, may be secondary to surgical hypoparathyroidism, which leads to an imbalance between resorption and formation in favor of the latter, resulting in increased bone mass at both cortical and trabecular sites in humans [39]. However, such findings may only partially apply to the present animal study because of the concomitant presence of CKD.
An alternative mechanism may be the involvement of the Wnt/β-catenin pathway in the pathogenesis of CKD-MBD. Because of the low PTH levels in the present study, we expected to observe high serum sclerostin levels in both Nx+PTx groups. However, only when the phosphate intake was very high, namely in the Nx+PTx 1.2% group, did SOST mRNA expression increase to higher than normal values and did serum sclerostin levels return to the normal range. The differences between the two CKD rat groups were observed despite similar serum phosphate, PTH and creatinine levels, suggesting that dietary phosphate directly or indirectly regulates β-catenin activity, at least partially through modulation of SOST gene activity. The observed inverse correlation between serum sclerostin and bone volume is consistent with the well-known role of β-catenin in bone mass regulation. Supporting our results, a recent study of predialysis CKD patients showed that phosphate FE and serum sclerostin levels were elevated at baseline. After therapy with the phosphate binder sevelamer, a decrease in serum sclerostin was seen despite a significant decrease in serum PTH, suggesting a role of dietary phosphate in the modulation of sclerostin production [40].
The increase in osteoblast and osteocyte apoptosis in Nx+PTx rats in response to high phosphate intake confirms a previous in vitro study by Meleti et al. [41] in osteoblast-like cells. However, apoptosis could also be mediated by SOST, since canonical Wnt signaling appears to protect against programmed cell death through β-catenin-dependent mechanisms.
As regards the other analyses of changes in skeletal gene expression, we observed higher Dkk-1 mRNA levels in Nx+PTx 1.2% than in Nx+PTx 0.6% and sham animals. The true role of Dkk-1 in renal osteodystrophy remains unknown. Another interesting finding was that expression of Gsk3b, which phosphorylates β-catenin and stimulates β-catenin degradation, was lower in CKD animals fed the 0.6% phosphate diet than in those fed the 1.2% phosphate diet, again suggesting that dietary phosphate is involved in the regulation of Wnt/β-catenin signaling. In addition, phosphate was also shown to increase RANK gene expression, which could equally have contributed to the observed lower bone volume.
Our study has several limitations. First, we did not evaluate inflammatory markers, which could have influenced serum sclerostin levels. Second, serum phosphate levels and phosphate FE were measured only in the fasting state. Third, there was no correlation between serum sclerostin levels and SOST gene expression values; a longer observation time might have been necessary to observe increased circulating sclerostin resulting from the increase in SOST mRNA, and there may also still be technical problems with the determination of serum sclerostin, both in humans and in animals. Fourth, for the analyses of gene expression in bone, a small number of samples was analyzed. Nevertheless, we believe this did not invalidate our results, since the values were very uniform, with almost no variation within groups and almost ten-fold variation among them [42]. Fifth, for calcitriol measurements we used pooled sera, allowing comparison of only a few samples per group, which could explain the observed absence of significant differences between groups. Finally, we did not include a control group fed the 1.2% phosphate diet, because these animals would almost certainly have developed secondary hyperparathyroidism.
In conclusion, high dietary phosphate intake, as compared to normal intake, reduces bone volume in CKD rats with ABD. We show for the first time that, in this condition, dietary phosphate intake stimulates Wnt pathway suppressors, regulating bone SOST and Dkk-1 mRNA expression independently of PTH. High phosphate intake also increases Gsk3b and RANK mRNA expression and enhances osteoblast and osteocyte apoptosis in CKD rats with ABD. The underlying mechanisms require further study. Finally, when extrapolated to the clinical setting, these findings underscore the importance of controlling dietary phosphate intake and absorption in patients with CKD and ABD. | 4,481.4 | 2013-11-13T00:00:00.000 | [
"Medicine",
"Biology"
] |
On Adversarial Removal of Hypothesis-only Bias in Natural Language Inference
Popular Natural Language Inference (NLI) datasets have been shown to be tainted by hypothesis-only biases. Adversarial learning may help models ignore sensitive biases and spurious correlations in data. We evaluate whether adversarial learning can be used in NLI to encourage models to learn representations free of hypothesis-only biases. Our analyses indicate that the representations learned via adversarial learning may be less biased, with only small drops in NLI accuracy.
Introduction
Popular datasets for Natural Language Inference (NLI), the task of determining whether one sentence (premise) likely entails another (hypothesis), contain hypothesis-only biases that allow models to perform the task surprisingly well by only considering hypotheses while ignoring the corresponding premises. For instance, such a method correctly predicted the examples in Table 1 as contradictions. As datasets may always contain biases, it is important to analyze whether, and to what extent, models are immune to or rely on known biases. Furthermore, it is important to build models that can overcome these biases.
Recent work in NLP aims to build more robust systems using adversarial methods (Alzantot et al., 2018; Chen & Cardie, 2018; Belinkov & Bisk, 2018, i.a.). In particular, Elazar & Goldberg (2018) attempted to use adversarial training to remove demographic attributes from text data, with limited success. Inspired by this line of work, we use adversarial learning to add small components to an existing and popular NLI system that has been used to learn general sentence representations (Conneau et al., 2017). The adversarial techniques include (1) using an external adversarial classifier conditioned on hypotheses alone, and (2) creating noisy, perturbed training examples. In our analyses we ask whether hidden, hypothesis-only biases are no longer present in the resulting sentence representations after adversarial learning. The goal is to build models with less bias, ideally while limiting the inevitable degradation in task performance. Our results suggest that progress on this goal may depend on which adversarial learning techniques are used. Although recent work has applied adversarial learning to NLI (Minervini & Riedel, 2018; Kang et al., 2018), this is the first work to our knowledge that explicitly studies NLI models designed to ignore hypothesis-only biases.
Methods
We consider two types of adversarial methods. In the first method, we incorporate an external classifier to force the hypothesis encoder to ignore hypothesis-only biases. In the second method, we randomly swap premises in the training set to create noisy examples.
General NLI Model
Let (P, H) denote a premise-hypothesis pair, g denote an encoder that maps a sentence S to a vector representation v, and c a classifier that maps v to an output label y. A general NLI framework contains the following components:
• A premise encoder g_P that maps the premise P to a vector representation p.
• A hypothesis encoder g_H that maps the hypothesis H to a vector representation h.
• A classifier c_NLI that combines and maps p and h to an output y.
In this model, the premise and hypothesis are each encoded with separate encoders. The NLI classifier is usually trained to minimize the objective:

min_{g_P, g_H, c_NLI} L(c_NLI(g_P(P), g_H(H)), y)    (1)

where L(ỹ, y) is the cross-entropy loss. If g_P is not used, a model should not be able to successfully perform NLI. However, models without g_P may achieve non-trivial results, indicating the existence of biases in hypotheses (Gururangan et al., 2018; Poliak et al., 2018; Tsuchiya, 2018).
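As a concrete illustration, the sketch below implements this general framework in PyTorch. The encoder sizes and the feature combination [p; h; |p - h|; p * h] follow the InferSent setup described later in the experimental section; inputs are assumed to be pre-computed word embeddings, and all names are ours rather than the paper's.

```python
# Minimal PyTorch sketch of the general NLI framework; sizes are placeholders.
import torch
import torch.nn as nn

class Encoder(nn.Module):
    """g: token embeddings -> fixed sentence vector via BiLSTM + max-pooling."""
    def __init__(self, emb_dim=300, hidden=256):
        super().__init__()
        self.lstm = nn.LSTM(emb_dim, hidden, batch_first=True, bidirectional=True)

    def forward(self, x):                 # x: (batch, seq, emb_dim)
        out, _ = self.lstm(x)             # (batch, seq, 2*hidden)
        return out.max(dim=1).values      # max-pool over time

class NLIModel(nn.Module):
    def __init__(self, dim=512, n_classes=3):
        super().__init__()
        self.g_p, self.g_h = Encoder(), Encoder()
        self.c_nli = nn.Sequential(nn.Linear(4 * dim, 512), nn.Tanh(),
                                   nn.Linear(512, n_classes))

    def forward(self, premise, hypothesis):
        p, h = self.g_p(premise), self.g_h(hypothesis)
        feats = torch.cat([p, h, (p - h).abs(), p * h], dim=-1)
        return self.c_nli(feats)          # logits; train with cross-entropy loss
```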
AdvCls: Adversarial Classifier
Our first approach, referred to as AdvCls, follows the common adversarial training method (Goodfellow et al., 2015; Ganin & Lempitsky, 2015; Xie et al., 2017; Zhang et al., 2018) by adding an additional adversarial classifier c_Hypoth to our model. c_Hypoth maps the hypothesis representation h to an output y. In domain adversarial learning, the classifier is typically used to predict unwanted features, e.g., protected attributes like race, age, or gender (Elazar & Goldberg, 2018). Here, we do not have explicit protected attributes but rather latent hypothesis-only biases. Therefore, we use c_Hypoth to predict the NLI label given only the hypothesis. To successfully perform this prediction, c_Hypoth needs to exploit latent biases in h.
We modify the objective function (1) as:

L_AdvCls = L(c_NLI(g_P(P), g_H(H)), y) + λ_Loss · L(c_Hypoth(GRL_λEnc(g_H(H))), y)    (2)

To control the interplay between c_NLI and c_Hypoth we set two hyper-parameters: λ_Loss, the importance of the adversarial loss function, and λ_Enc, a scaling factor that multiplies the gradients after reversing them. This is implemented by the scaled gradient reversal layer, GRL_λ (Ganin & Lempitsky, 2015). The goal here is to modify the representation g_H(H) so that it is maximally informative for NLI while simultaneously minimizing the ability of c_Hypoth to accurately predict the NLI label.
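The gradient reversal layer is the only non-standard component here, so a sketch may help. The implementation below is the usual autograd-function pattern for a scaled GRL; the training-step lines in the trailing comments reuse the NLIModel sketch above, and c_hypoth is a hypothetical classifier head.

```python
# Sketch of the scaled gradient reversal layer (GRL) and the AdvCls loss (2).
import torch

class GradReverse(torch.autograd.Function):
    @staticmethod
    def forward(ctx, x, lambd):
        ctx.lambd = lambd
        return x.view_as(x)                       # identity on the forward pass

    @staticmethod
    def backward(ctx, grad_output):
        return -ctx.lambd * grad_output, None     # reverse and scale the gradient

def grl(x, lambd):
    return GradReverse.apply(x, lambd)

# Inside a training step (ce = nn.CrossEntropyLoss(); names are assumptions):
# h = model.g_h(hypothesis)
# adv_logits = c_hypoth(grl(h, lambd=lambda_enc))
# loss = ce(model(premise, hypothesis), y) + lambda_loss * ce(adv_logits, y)
```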
AdvDat: Adversarial Training Data
For our second approach, which we call AdvDat, we use an unchanged general model but train it with perturbed training data. For a fraction of example (P, H) pairs in the training data, we replace P with P′, a premise from another training example chosen uniformly at random. For these instances, during back-propagation, we similarly reverse the gradient, but only back-propagate through g_H. The adversarial loss function L_RandAdv is defined as:

L_RandAdv = L(c_NLI(GRL_0(g_P(P′)), GRL_λ(g_H(H))), y)    (3)

where GRL_0 implements gradient blocking on g_P, using the identity function in the forward step and a zero gradient during the backward step. At the same time, GRL_λ reverses the gradient going into g_H and scales it by λ_Enc, as before.
We set a hyper-parameter λ_Rand ∈ [0, 1] that controls what fraction of P's are swapped at random. In turn, the final loss function combines the two losses based on λ_Rand as:

L = (1 − λ_Rand) · L(c_NLI(g_P(P), g_H(H)), y) + λ_Rand · L_RandAdv    (4)

In essence, this method penalizes the model for correctly predicting y in perturbed examples where the premise is uninformative. This implicitly assumes that the label for (P′, H) should be different from the label for (P, H), which in practice does not always hold true.
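One way to realize equations (3)-(4) is sketched below. It is a per-batch approximation (the paper swaps a fraction of the whole training set) and reuses NLIModel and grl from the earlier sketches; all remaining names are assumptions.

```python
# Sketch of an AdvDat training loss; assumes 0 < lambda_rand < 1 so that both
# batch slices are non-empty, and reuses NLIModel / grl from the sketches above.
import torch

def advdat_loss(model, ce, premise, hypothesis, y, lambda_rand=0.4, lambda_enc=1.0):
    n = premise.size(0)
    k = int(lambda_rand * n)                     # number of perturbed examples

    # Unperturbed examples: standard NLI loss.
    loss_nli = ce(model(premise[k:], hypothesis[k:]), y[k:])

    # Perturbed examples: pair the first k hypotheses with random premises.
    rand_p = premise[torch.randperm(n)[:k]]
    p = model.g_p(rand_p).detach()               # GRL_0: identity fwd, zero grad
    h = grl(model.g_h(hypothesis[:k]), lambda_enc)  # reverse + scale grad into g_H
    feats = torch.cat([p, h, (p - h).abs(), p * h], dim=-1)
    loss_adv = ce(model.c_nli(feats), y[:k])

    return (1 - lambda_rand) * loss_nli + lambda_rand * loss_adv
```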
Experiments & Results
Experimental setup Out of 10 NLI datasets, Poliak et al. (2018) found that the Stanford Natural Language Inference dataset (SNLI; Bowman et al., 2015) contained the most (or worst) hypothesis-only biases: their hypothesis-only model outperformed the majority baseline by roughly 100% (going from roughly 34% to 69%). Because of the large magnitude of these biases, confirmed by Tsuchiya (2018) and Gururangan et al. (2018), we focus on SNLI. We use the standard SNLI split and report validation and test results. We also test on SNLI-hard, a subset of SNLI that Gururangan et al. (2018) filtered such that it may not contain unwanted artifacts.
We apply both adversarial techniques to InferSent (Conneau et al., 2017), which serves as our general NLI architecture. Following the standard training details used in InferSent, we encode premises and hypotheses separately using bi-directional long short-term memory (BiLSTM) networks (Hochreiter & Schmidhuber, 1997). Premises and hypotheses are initially mapped (token-by-token) to GloVe (Pennington et al., 2014) representations. We use max-pooling over the BiLSTM states to extract premise and hypothesis representations and, following Mou et al. (2016), combine the representations by concatenating their vectors, their difference, and their element-wise multiplication.
We use the default training hyper-parameters in the released InferSent codebase. These include setting the initial learning rate to 0.1 and the decay rate to 0.99, using SGD optimization, and dividing the learning rate by 5 at every epoch in which the accuracy decreases on the validation set. The default settings also include stopping training either when the learning rate drops below 10^-5 or after 20 epochs. In both adversarial settings, the hyper-parameters are swept through {0.05, 0.1, 0.2, 0.4, 0.8, 1.0}.
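The learning-rate schedule described above can be written compactly; the sketch below is our reading of that description, with train_one_epoch as a hypothetical stand-in for one epoch of SGD training plus validation.

```python
# Sketch of the InferSent-style schedule described above; train_one_epoch is
# a hypothetical placeholder, not part of the InferSent codebase.
import random

def train_one_epoch(lr: float) -> float:
    return random.random()          # placeholder for real training + validation

lr, best_val = 0.1, 0.0
for epoch in range(20):             # stop after at most 20 epochs
    val_acc = train_one_epoch(lr)
    if val_acc < best_val:
        lr /= 5.0                   # divide LR by 5 when val accuracy decreases
    best_val = max(best_val, val_acc)
    lr *= 0.99                      # per-epoch decay
    if lr < 1e-5:                   # stop once the LR is too small
        break
```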
Results
Table 2 reports the results on SNLI, with the configurations that performed best on the validation set for each of the adversarial methods. The difference for AdvCls is minimal, and it even slightly outperforms InferSent on the validation set. While AdvDat's results are noticeably lower than the non-adversarial InferSent, the drops are still less than 6 percentage points.
Analysis
Our goal is to determine whether adversarial learning can help build NLI models without hypothesis-only biases. We first ask whether the models' learned sentence representations can be used by a hypothesis-only classifier to perform well. We then explore the effects of increasing the adversarial strength, and end with a discussion of indicator words associated with hypothesis-only biases.
Hidden Biases
Do the learned sentence representations eliminate hypothesis-only biases after adversarial training? We freeze sentence encoders trained with the studied methods, and retrain a new classifier that only accesses representations from the frozen hypothesis encoder. This helps us determine whether the (frozen) representations have hidden biases.
A few trends can be noticed. First, we confirm that with AdvCls (Figure 1a), the hypothesis-only classifier (c_Hypoth) is indeed trained to perform poorly on the task, while the normal NLI classifier (c_NLI) performs much better. However, retraining a classifier on frozen hypothesis representations (c_Hypoth, retrained) boosts performance. In fact, the retrained classifier performs close to the fully trained hypothesis-only baseline, indicating the hypothesis representations still contain biases. Consistent with this finding, Elazar & Goldberg (2018) found that adversarially-trained text classifiers preserve demographic attributes in hidden representations despite efforts to remove them. Interestingly, we found that even a frozen random encoder captures biases in the hypothesis, as a classifier trained on it performs fairly well (63.26%), far above the majority class baseline (34.28%). One reason might be that the word embeddings (which are pre-trained) alone contain significant information that propagates even through a random encoder. Others have also found that random encodings contain non-trivial information (Conneau et al., 2018; Zhang & Bowman, 2018). The fact that the word embeddings were not updated during (adversarial) training could account for the ability to recover performance at the level of the classifier trained on a random encoder. This may indicate that future adversarial efforts should be applied to the word embeddings as well.
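The probing setup described here, freezing the trained hypothesis encoder and retraining only a fresh classifier on top, can be sketched as follows. It reuses NLIModel from the earlier sketch; the probe size and optimizer settings are assumptions.

```python
# Sketch of the probing experiment: freeze g_H, retrain a new classifier.
import torch
import torch.nn as nn

model = NLIModel()                        # assume already adversarially trained
for param in model.g_h.parameters():
    param.requires_grad = False           # freeze the hypothesis encoder

probe = nn.Linear(512, 3)                 # new hypothesis-only classifier
opt = torch.optim.SGD(probe.parameters(), lr=0.1)

def probe_step(hypothesis_emb, y):
    h = model.g_h(hypothesis_emb).detach()        # frozen representation
    loss = nn.functional.cross_entropy(probe(h), y)
    opt.zero_grad(); loss.backward(); opt.step()
    return loss.item()
```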
Turning to AdvDat (Figure 1b), as the hyper-parameters increase, the models exhibit fewer biases. Performance even drops below the random encoder results, indicating AdvDat may be better at ignoring biases in the hypothesis. However, this comes at the cost of reduced NLI performance.
Adversarial Strength
Is there a correlation between adversarial strength and drops in SNLI performance? Does increasing adversarial hyper-parameters affect the decrease in results on SNLI?
Figure 2 shows the validation results with various configurations of adversarial hyper-parameters. The AdvCls method is fairly stable across configurations, although combinations of large λ_Loss and λ_Enc hurt the performance on SNLI a bit more (Figure 2a). Nevertheless, all the drops are moderate. Increasing the hyper-parameters further (up to values of 5) did not lead to substantial drops, although the results are slightly less stable across configurations (Appendix A). On the other hand, the AdvDat method is very sensitive to large hyper-parameters (Figure 2b). For every value of λ_Enc, increasing λ_Rand leads to significant performance drops. These drops happen sooner for larger λ_Enc values. Therefore, the effect of stronger hyper-parameters on SNLI performance seems to be specific to each adversarial method.
Indicator Words
Certain words in SNLI are more correlated with specific entailment labels than others; e.g., negation words ("not", "nobody", "no") correlate with CONTRADICTION (Gururangan et al., 2018; Poliak et al., 2018). These words have been referred to as "give-away" words (Poliak et al., 2018). Do the adversarial methods encourage models to make predictions that are less affected by these biased indicator words? For each of the most biased words in SNLI associated with the CONTRADICTION label, we computed the probability that a model predicts an example as a contradiction, given that the hypothesis contains the word. Table 3 shows the top 10 examples in the training set. For each word w, we give its frequency in SNLI, its empirical correlation with the label and with InferSent's prediction, and the percentage decrease in correlations with CONTRADICTION predictions for three configurations of our methods. Generally, the baseline correlations are more uniform than the empirical ones (p(l|w)), suggesting that indicator words in SNLI might not greatly affect an NLI model, a possibility that both Poliak et al. (2018) and Gururangan et al. (2018) concede. For example, Gururangan et al. (2018) explicitly mention that "it is important to note that even the most discriminative words are not very frequent." However, we still observed small skews towards CONTRADICTION. Thus, we investigate whether our methods reduce the probability of predicting CONTRADICTION when a hypothesis contains an indicator word. The model trained with AdvDat (where λ_Rand = 0.4, λ_Enc = 1) predicts contradiction much less frequently than InferSent on examples with these words. This configuration was the strongest AdvDat model that still performed reasonably well on SNLI (Figure 2b). Here, AdvDat appears to remove some of the biases learned by the baseline, unmodified InferSent. We also provide two other configurations that do not show such an effect, illustrating that this behavior depends highly on the hyper-parameters.
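The quantity p(l|w) used in this analysis is simple to estimate. The sketch below computes it for gold labels and for a model's predictions; the data layout and field names are assumptions, and the two toy examples are ours.

```python
# Sketch of the indicator-word analysis: estimate p(label | word in hypothesis)
# empirically ('gold') and under a model's predictions ('pred').
def label_given_word(examples, word, label="contradiction", key="gold"):
    """examples: iterable of dicts with 'hypothesis' (str) and label fields."""
    with_word = [ex for ex in examples if word in ex["hypothesis"].split()]
    if not with_word:
        return 0.0
    return sum(ex[key] == label for ex in with_word) / len(with_word)

examples = [
    {"hypothesis": "Nobody is doing tricks", "gold": "contradiction", "pred": "contradiction"},
    {"hypothesis": "A man is sleeping", "gold": "neutral", "pred": "contradiction"},
]
print(label_given_word(examples, "Nobody"))               # empirical p(l|w)
print(label_given_word(examples, "Nobody", key="pred"))   # model's correlation
```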
Conclusion
We applied two adversarial learning techniques to a general NLI model, adding an external adversarial hypothesis-only classifier and perturbing training examples. Our experiments and analyses suggest that these techniques may help models exhibit fewer hypothesis-only biases. We hope this work will encourage the development and analysis of models that include components that ignore hypothesis-only biases, as well as similar biases discovered in other natural language understanding tasks (Schwartz et al., 2017), including visual question answering, where recent work has considered similar adversarial techniques for removing language biases (Ramakrishnan et al., 2018; Grand & Belinkov, 2019).
A Stronger hyper-parameters for AdvCls
Figure 3 provides validation results using AdvCls with stronger hyper-parameters, to complement the discussion in §4.2. While it is difficult to discern trends, all configurations perform similarly and slightly below the baseline. These models seem to be less stable than those using smaller hyper-parameters, as discussed in §4.2.
Figure 1: Validation results when retraining a classifier on a frozen hypothesis encoder (c_Hypoth, retrained), compared to our methods (c_NLI), the adversarial hypothesis-only classifier (c_Hypoth, in AdvCls), the majority baseline, a random frozen encoder, and a hypothesis-only model. (a) Hidden biases remaining from AdvCls. (b) Hidden biases remaining from AdvDat.
Figure 2: Results on the validation set with different configurations of the adversarial methods.
Table 1: Examples from the Poliak et al. (2018) development set that their hypothesis-only model correctly predicted as contradictions. The first line in each section is a premise and the following lines are corresponding hypotheses; the italicized words are correlated with the "contradiction" label in SNLI.
Premise: A dog runs through the woods near a cottage. Hypothesis: The dog is sleeping on the ground.
Premise: A person writing something on a newspaper. Hypothesis: A person is driving a fire truck.
Premise: A man is doing tricks on a skateboard. Hypothesis: Nobody is doing tricks.
Table 2: Accuracies for the approaches. Baseline refers to the unmodified, non-adversarial InferSent.
Table 3: Indicator words and how correlated they are with CONTRADICTION predictions. The parentheses indicate hyper-parameter values: (λ_Loss, λ_Enc) for AdvCls and (λ_Rand, λ_Enc) for AdvDat. Baseline refers to the unmodified InferSent. | 3,396.4 | 2019-07-09T00:00:00.000 | [
"Computer Science",
"Psychology"
] |